Current Status:

  • Sync: Online
  • IFTTT: Online

Edit March 21, 2018 10:43 PM MDT:

Sync is back online! We are extremely sorry for any inconvenience this may have caused. If necessary, please sign out of Day One Sync and back in. As mentioned before, we believe this to be an isolated incident. You can check our sync status via the dashboards linked at the bottom of this guide.

Please contact support if you have any outstanding questions or concerns. 

Edit March 21, 2018 11:00 AM MDT:

After the cluster re-balance and re-index mentioned in our previous status update, it became apparent that, although the database reported having completed the rebalance successfully, it was not in a functioning state and was not accepting write requests from our application. This meant that dayone.me could read data and app.dayone.me could be used to read journal data, but sync could not complete successfully.
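In other words, the cluster could pass a read check while still rejecting every write. As a rough illustration of why the two have to be verified separately (this is a hypothetical sketch, not our actual tooling; the client object and its methods are assumptions), a post-maintenance health check needs to exercise the write path as well as the read path:

    # Hypothetical post-maintenance health check (Python): a cluster that
    # serves reads can still be rejecting writes, so both paths are probed.
    import uuid

    def verify_cluster_health(client):
        """Return True only if a write, read-back, and delete all succeed."""
        probe_key = f"health-check-{uuid.uuid4()}"
        try:
            # Write path: the step that kept failing even after the
            # rebalance reported success.
            client.put(probe_key, {"ok": True})
            # Read path: the only step a read-only check would have covered.
            client.get(probe_key)
            # Clean up the probe document.
            client.delete(probe_key)
        except Exception as exc:
            print(f"health check failed: {exc}")
            return False
        return True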

We're now taking steps to restore the database to a fully functional state, but these steps will likely take hours to complete.

Again, we sincerely apologize and thank you for your patience.

Edit March 21, 2018 12:29 AM MDT:

At the beginning of this year, we scheduled an upgrade of our primary database cluster to the most recent version. Because this is a large cluster, and the process is delicate and involved, the work was scheduled across multiple phases over a three-month period. We also decided that, in the process of rebuilding the database cluster, we would overbuild its capacity to ensure that new functionality inherent in the new version wouldn't overwhelm the cluster. That process began and ended without any significant interruptions.

As of today, the newly upgraded cluster has been running successfully for over a month. The collected data confirms that the cluster is underutilized and has excess capacity, so we began the process of removing the excess nodes. The procedure we followed was deemed safe according to the vendor documentation; however, it quickly became apparent that this was not a safe operation. All of the affected nodes were immediately placed back in the cluster, which then required a full re-balance and re-index. Unfortunately, this is a very long process, lasting approximately 9 hours from start to finish.

We can assure you that this was an isolated incident, and procedures have been put in place to prevent this from happening again. Proper care and thorough measures are taken to ensure the health and safe-keeping of the data warehouse. 

We sincerely apologize for the inconvenience this has caused you, and deeply appreciate your patience in these matters.

Status Dashboards:
