<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Natasha Collins, Author at The SERO Group</title>
	<atom:link href="https://theserogroup.com/author/natashac/feed/" rel="self" type="application/rss+xml" />
	<link>https://theserogroup.com/author/natashac/</link>
	<description>SQL Servers Healthy, Secure, And Reliable</description>
	<lastBuildDate>Fri, 26 Jul 2024 22:51:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>

<image>
	<url>https://theserogroup.com/wp-content/uploads/2024/07/cropped-Canister-only-1-32x32.png</url>
	<title>Natasha Collins, Author at The SERO Group</title>
	<link>https://theserogroup.com/author/natashac/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">121220030</site>	<item>
		<title>What’s in a Job Title? Understanding Changing Data Roles</title>
		<link>https://theserogroup.com/data-strategy/different-data-roles/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Wed, 31 Jul 2024 12:00:00 +0000</pubDate>
				<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Training]]></category>
		<category><![CDATA[The Sero Group]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=6350</guid>

					<description><![CDATA[<p>The world of data is rapidly evolving, and the demand for skilled data professionals has continued to rise. But who are these data professionals? Those of us in the field have been asked many times about the nature of what we do. Students and prospective career changers, hiring managers, business partners, and prospective clients all&#8230; <br /> <a class="read-more" href="https://theserogroup.com/data-strategy/different-data-roles/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/data-strategy/different-data-roles/">What’s in a Job Title? Understanding Changing Data Roles</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The world of data is rapidly evolving, and the demand for skilled data professionals has continued to rise. But who are these data professionals? Those of us in the field have been asked many times about the nature of what we do. Students and prospective career changers, hiring managers, business partners, and prospective clients all have questions about what falls within the expertise of a “data professional”.</p>



<p>The answer is not simple.</p>



<p>Data roles are diverse and constantly evolving, and the lines that separate data disciplines are inherently blurred. The day-to-day reality of active data professionals is just as complex: business and project demands often require them to extend their expertise across disciplines. Consequently, data professionals often wear many hats.</p>



<p>Still, it&#8217;s useful for those entering the field or looking to hire a data professional to understand some of the important distinctions between data disciplines.</p>



<p>Here are just a few.</p>



<h2 class="wp-block-heading">Common Data Roles</h2>



<h3 class="wp-block-heading"><strong>Data Architect</strong></h3>



<p>Data architects design the overall blueprint for your organization&#8217;s data environment. They define how data is stored, organized, integrated, and accessed across systems. They also ensure that your data infrastructure is scalable, secure, architected for efficient retrieval, and aligned with your long-term business goals.</p>



<h3 class="wp-block-heading"><strong>Database Administrator (DBA)</strong></h3>



<p>DBAs are the administrators of your database systems. They possess a deep knowledge of the database engine itself, including all its native functionality and features. They are also responsible for keeping databases updated, backed up, secure, and performing optimally. DBAs also manage database upgrades and migrations, as well as database recovery in a disaster or emergency.</p>



<h3 class="wp-block-heading"><strong>Data Engineer</strong></h3>



<p>Think of data engineers as the builders of your data infrastructure. They design, construct, and maintain the pipelines that collect, store, and process your data. Their toolkit is diverse, often including programming languages like Python and SQL as well as cloud platforms like Azure, AWS, or GCP. These professionals ensure that your data is accessible, reliable, and ready for analysis.</p>



<h3 class="wp-block-heading"><strong>Data Analyst / Business Intelligence Analyst</strong></h3>



<p>Data analysts are the storytellers of the data world. They take the data that engineers have prepared and draw out insights by creating reports, dashboards, and visualizations. Their strong analytical skills and ability to recognize patterns allow them to turn data into knowledge, and they tell stories with data to help guide decisions. Tools often include Excel and BI reporting platforms like Tableau or Power BI.</p>



<h3 class="wp-block-heading"><strong>Data Scientist</strong></h3>



<p>Data scientists take analytics to a deeper level. They use advanced statistical techniques and machine learning to uncover hidden patterns that are often difficult for humans to detect. Their algorithms can predict future trends, and the outputs of their models drive decision-making. Data scientists possess mathematical expertise, programming skills in languages like Python or R, and domain knowledge relevant to their industry.</p>



<h3 class="wp-block-heading"><strong>Machine Learning Engineer</strong></h3>



<p>Machine learning engineers take the models created by data scientists and make them operational in production environments. Leveraging both data science and software engineering skills, they build the systems that deploy, monitor, and scale data science models. They also manage these systems to ensure they continue to deliver accurate and timely predictions once in real-world use.</p>



<h2 class="wp-block-heading" id="h-tips-for-choosing-your-future-data-role">Tips for Choosing Your Future Data Role</h2>



<p>Shadowing professionals in different data roles is a great way to get started. Depending on which role appeals to you, there are different pathways into the field.</p>



<p>If you have a background in system administration, data architecture and database administration may be good avenues to investigate. Likewise, people who have enjoyed building their own home labs may also enjoy these data roles.</p>



<p>Similarly, if you love automation and problem-solving for technical efficiency, one of the engineering roles may be right for you. Engineers enjoy designing and building solutions for technical challenges.</p>



<p>Finally, if you have a mind for analytics and statistics, an analyst or data science role may be a great fit. These roles uncover the root causes of problems in order to unlock potential solutions.</p>



<h3 class="wp-block-heading" id="h-there-are-a-variety-of-ways-to-kick-start-your-journey-for-any-of-these-data-disciplines">There are a variety of ways to kick-start your journey for any of these data disciplines.</h3>



<ol class="wp-block-list">
<li><strong><a href="https://www.kdnuggets.com/2021/02/10-resources-data-science-self-study.html">Self-Education</a></strong>. There are many resources available online that can guide you through learning specialized skills. Set up a home lab environment in which to safely practice. Start to build a portfolio of projects and/or certifications that can showcase your new skills.</li>



<li><strong><a href="https://www.forbes.com/advisor/education/bootcamps/best-data-science-bootcamps/">Bootcamps</a>.</strong> The number and variety of bootcamps for data have increased dramatically in the last 5 years. If your schedule allows for participation in one of these intense programs, they can be a great way to upskill rapidly.</li>



<li><strong><a href="https://www.usnews.com/best-colleges/rankings/computer-science/data-analytics-science">Formal education</a>.</strong> If you are looking to go into a highly specialized role in a particular knowledge domain or industry, obtaining an advanced degree can be a great way to get started.</li>
</ol>



<p>Determining which path to take will depend on what appeals to you and on the circumstances in which you are beginning your journey. The key, however, is just taking the first step.</p>



<h2 class="wp-block-heading" id="h-tips-for-hiring-which-data-role-do-you-need">Tips for Hiring: Which Data Role Do You Need?</h2>



<p>Just as the path to becoming a data professional depends on individual circumstances, the type of data professional to hire will also depend on your organization’s needs and where you are in your data maturity journey.</p>



<p>Here are some hiring considerations for several common scenarios:</p>



<h3 class="wp-block-heading" id="h-just-starting-out"><strong>Just Starting Out</strong> </h3>



<p>If you are building your data capabilities from the ground up, a data engineer with versatile skills is a great first hire. Partnering with system administrators and business experts, and potentially drawing on external assistance from a DBA or data architect, a data engineer can lay the foundation for your future data work.</p>



<h3 class="wp-block-heading" id="h-business-optimization-improved-operations"><strong>Business Optimization / Improved Operations</strong></h3>



<p>If you are looking to use your data for improved operations, a data or business intelligence analyst can work with business partners to track key metrics, identify trends, and surface opportunities for improvement. Likewise, by developing an understanding of the business, this person can serve as a bridge between business and IT teams to help harness the full potential of your data.</p>



<h3 class="wp-block-heading" id="h-complex-or-outdated-data-environment"><strong>Complex or Outdated Data Environment</strong></h3>



<p>A data architect can bring order to chaos by ensuring that your data is well-organized, accessible, and scalable as your organization grows and expands. Data architects can also be instrumental in reshaping legacy structures to meet current business requirements and make the best use of modern technologies.</p>



<h3 class="wp-block-heading" id="h-predictive-insights-and-automation"><strong>Predictive Insights and Automation</strong></h3>



<p>Data scientists and machine learning engineers are ideal for building models that can make important predictions, optimize processes, and/or automate complex AI tasks. It is important to note, however, that having a robust, well-governed data infrastructure is a prerequisite for success for these types of initiatives.</p>



<h3 class="wp-block-heading" id="h-diverse-needs-large-scale-projects-or-complex-remediation"><strong>Diverse Needs, Large-Scale Projects, or Complex Remediation</strong></h3>



<p>Partnering with an external data team with diverse areas of expertise can be an affordable way to remediate complex problems, make infrastructure improvements rapidly, or design and implement large-scale solutions. Leveraging this type of support also allows you to access professionals with expertise in different data roles and may offer flexibility for scaling your level of support up or down as needed.</p>



<h2 class="wp-block-heading" id="h-want-to-work-with-the-sero-group">Want to work with The SERO Group?</h2>



<p>Want to learn more about how SERO Group helps organizations take the guesswork out of managing their SQL estate? <a href="https://theserogroup.com/contact-us/" target="_blank" rel="noreferrer noopener">Schedule a no-obligation discovery call</a> with us to get started.</p>
<p>The post <a href="https://theserogroup.com/data-strategy/different-data-roles/">What’s in a Job Title? Understanding Changing Data Roles</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6350</post-id>	</item>
		<item>
		<title>Is your SQL Server Code Ready for Azure?</title>
		<link>https://theserogroup.com/azure/azure-sql-migration-code/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Wed, 26 Jun 2024 12:00:00 +0000</pubDate>
				<category><![CDATA[Azure]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[Database Development]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[Shared Disks]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Consultant]]></category>
		<category><![CDATA[SQL Security]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<category><![CDATA[TempDB]]></category>
		<category><![CDATA[The Sero Group]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=6078</guid>

					<description><![CDATA[<p>I recently had a discussion with a client that turned to the question of SQL Server code compatibility with Azure SQL Database. We were designing a new pipeline for their on-premises SQL environment, and they mentioned their abandoned cloud migration effort from a few years earlier. The business ended up pausing this effort because of&#8230; <br /> <a class="read-more" href="https://theserogroup.com/azure/azure-sql-migration-code/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/azure/azure-sql-migration-code/">Is your SQL Server Code Ready for Azure?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I recently had a discussion with a client that turned to the question of SQL Server code compatibility with Azure SQL Database. We were designing a new pipeline for their on-premises SQL environment, and they mentioned their abandoned cloud migration effort from a few years earlier. The business ended up pausing this effort because of code incompatibilities that would have required an unexpected amount of re-engineering. After that experience, they wanted to ensure that any new pipeline development was done while being mindful of a possible future migration.</p>



<h2 class="wp-block-heading" id="h-the-risks-of-delaying-code-analysis">The risks of delaying code analysis</h2>



<p>Delaying an analysis of code compatibility is surprisingly common for businesses undertaking a migration to the cloud. Early cost-benefit analyses often address the hardware and infrastructure changes involved but can sometimes neglect to consider the impact of required code changes.</p>



<p>Infrastructure concerns are critical considerations when evaluating a move to the cloud. However, limiting our analysis to these considerations may hide the costs and risks associated with any necessary re-engineering.&nbsp;Unfortunately, it is very possible for these hidden risks and costs to turn out to be deal-breakers. To avoid a sticky situation, learn about these factors up front before investing time and energy into migration preparations.</p>



<h2 class="wp-block-heading" id="h-azure-options-sql-vm-vs-managed-instance-vs-sql-database">Azure options: SQL VM vs. Managed Instance vs. SQL Database</h2>



<p>It is important to note that a “move to the cloud” can come in many forms. Some examples are migrations to a hybrid environment, “lift and shift” moves to Azure-hosted VMs or SQL Managed Instances (SQL MI), or full or partial migrations to multi-tenant Azure SQL Databases (SQL DB). You will need to know what type of migration is being considered before evaluating code changes since the different options have different levels of compatibility with SQL Server.<br><br>Here&#8217;s a quick breakdown of the options in Azure with SQL Server compatibility.</p>



<h3 class="wp-block-heading">IaaS Option</h3>



<p><a href="https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview?view=azuresql">SQL Server on an Azure VM</a><br>Since this option constitutes a full installation of SQL Server on a dedicated Azure-hosted virtual machine, there are no code or feature incompatibilities to be concerned about. Azure SQL VMs achieve complete feature parity with on-premises SQL environments. With this option, the primary difference between the Azure implementation and an on-premises installation is the management of the underlying server.</p>



<h3 class="wp-block-heading">PaaS Options</h3>



<p><a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview?view=azuresql">Azure SQL Managed Instance</a><br>SQL MI boasts “near 100% compatibility” with the latest Enterprise Edition of the SQL Server database engine, while still including automated backups, patching, and high availability of the SQL environment. This option uses a single-tenant database engine intended to enable the least disruptive migration from an on-premises or Azure-hosted SQL Server instance to a full PaaS environment. This means that many of the incompatibilities that exist with Azure SQL Database are minimized or eliminated with Azure SQL MI. However, functionality that requires access to the file system or OS is still impacted.</p>



<p><a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-database-paas-overview?view=azuresql">Azure SQL Database</a><br>Azure SQL DB is another fully managed PaaS option consisting of a multi-tenant database engine that is optimized for cloud-native applications. While this option is generally less expensive than Azure SQL MI, there is less overlap with SQL Server and greater potential for code or data flow incompatibilities.</p>



<h2 class="wp-block-heading">10 Common Incompatibilities</h2>



<p>There are several very helpful tools (see the &#8220;Resources and tools&#8221; section below) that can help you identify code and data flow compatibility issues prior to migration. As you go through your analysis, keep in mind that Microsoft has established workarounds for many of these incompatibilities, so their presence in your code does not necessarily make them a barrier to migration.</p>



<p>For a high-level overview, here are some of the most common sticking points we see for migrating SQL code. The differences between SQL MI and SQL DB are included where applicable, as well as some potential workarounds.</p>



<h3 class="wp-block-heading" id="h-1-uses-linked-servers">1. Uses linked servers</h3>



<p>Linked servers can be used in <a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server?view=azuresql#linked-servers">SQL MI</a> to access SQL Server and Azure SQL Databases without distributed transactions. SQL DB requires the use of <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-query-horizontal-partitioning?view=azuresql">elastic queries</a> instead.</p>



<h3 class="wp-block-heading" id="h-2-performs-cross-database-queries-or-transactions">2. Performs cross-database queries or transactions</h3>



<p>These are supported with SQL MI, but not with SQL DB. In SQL DB, cross-database queries can often be converted to <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-query-horizontal-partitioning?view=azuresql">elastic queries</a>.</p>
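

<p>For illustration, here is a hedged sketch of how a simple cross-database reference might be converted to an elastic query in SQL DB. All server, database, credential, and object names below are hypothetical, and the external table&#8217;s columns must match the remote table&#8217;s definition:</p>



<pre class="wp-block-code"><code>-- Illustrative only: converting a cross-database reference to an elastic query.
-- 1. Store credentials for reaching the remote Azure SQL Database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '&lt;password&gt;';
CREATE DATABASE SCOPED CREDENTIAL RemoteCred
    WITH IDENTITY = '&lt;sql login&gt;', SECRET = '&lt;password&gt;';

-- 2. Register the remote database as an external data source.
CREATE EXTERNAL DATA SOURCE RemoteOrdersSrc
    WITH (TYPE = RDBMS,
          LOCATION = 'myserver.database.windows.net',
          DATABASE_NAME = 'RemoteOrdersDb',
          CREDENTIAL = RemoteCred);

-- 3. Map the remote table locally.
CREATE EXTERNAL TABLE dbo.Orders (
    OrderId INT,
    CustomerId INT,
    OrderDate DATE
) WITH (DATA_SOURCE = RemoteOrdersSrc);

-- 4. The former cross-database query becomes a plain local query.
SELECT TOP (10) * FROM dbo.Orders;</code></pre>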



<h3 class="wp-block-heading" id="h-3-uses-database-mail">3. Uses Database Mail</h3>



<p>This is available for SQL MI but not SQL DB. There are <a href="https://www.mssqltips.com/sqlservertip/7049/send-emails-azure-sql-database-azure-logic-apps/">workarounds</a> available for sending email in the Azure platform, but they will require some re-engineering.</p>



<h3 class="wp-block-heading" id="h-4-uses-system-tables-views-functions-or-stored-procedures">4. Uses system tables, views, functions, or stored procedures</h3>



<p>Some system objects are available in both SQL MI and SQL DB but not all. Consult Microsoft&#8217;s <a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server?view=azuresql">documentation</a> for a full comparison of what is available.</p>



<p>One important note is that the amount of space available to <strong>tempdb </strong>is provisioned in both SQL MI and SQL DB based on the number of cores available and the service tier licensed. Consult the documentation of each for details.</p>



<h3 class="wp-block-heading" id="h-5-accesses-windows-command-line-or-file-system">5. Accesses Windows command line or file system</h3>



<p>Neither SQL MI nor SQL DB supports direct access to the file system or the Windows command line. </p>



<p>One workaround is to migrate files to <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction">Azure Blob Storage</a> or <a href="https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction">Azure Files</a>. For SQL MI, with the appropriate security and firewall configurations, it is also possible to <a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/point-to-site-p2s-configure?view=azuresql">establish connectivity</a> between your Managed Instance&#8217;s VNet and the location of an on-premises file share.</p>



<p>SQL MI also supports SSISDB configuration and the <a href="https://learn.microsoft.com/en-us/azure/data-factory/how-to-invoke-ssis-package-managed-instance-agent">Integration Services Catalog</a>, allowing SSIS packages to be used for file manipulation. Azure Data Factory can also be leveraged to load and transform files for both SQL MI and SQL DB. An <a href="https://learn.microsoft.com/en-us/azure/data-factory/create-azure-ssis-integration-runtime">Azure-SSIS Integration Runtime (IR)</a> can be installed and configured, and SSIS packages can be run directly from <a href="https://www.mssqltips.com/sqlservertip/6025/using-files-stored-in-azure-file-services-with-integration-services-part-1/">Azure Data Factory</a>.</p>



<h3 class="wp-block-heading" id="h-6-uses-change-data-capture-cdc">6. Uses change data capture (CDC)</h3>



<p>Change data capture is supported for SQL MI. It is also supported for <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql">SQL DB</a>, but only in the S3 service tier and above.</p>



<h3 class="wp-block-heading" id="h-7-uses-bulk-insert-or-openrowset">7. Uses BULK INSERT or OPENROWSET</h3>



<p><a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server?view=azuresql#bulk-insert--openrowset">BULK INSERT and OPENROWSET</a> are supported only from an Azure file source (e.g., Azure Blob Storage or Azure Files).</p>
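

<p>As a hedged sketch (the storage account, container, credential, and file names here are hypothetical), redirecting an existing BULK INSERT to Azure Blob Storage typically looks like this:</p>



<pre class="wp-block-code"><code>-- Illustrative only: BULK INSERT from Azure Blob Storage via a SAS credential.
CREATE DATABASE SCOPED CREDENTIAL BlobCred
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '&lt;SAS token, without the leading ?&gt;';

CREATE EXTERNAL DATA SOURCE BlobSrc
    WITH (TYPE = BLOB_STORAGE,
          LOCATION = 'https://myaccount.blob.core.windows.net/imports',
          CREDENTIAL = BlobCred);

-- The familiar syntax, pointed at the Azure file source instead of a local path.
BULK INSERT dbo.StagingOrders
FROM 'orders.csv'
WITH (DATA_SOURCE = 'BlobSrc', FORMAT = 'CSV', FIRSTROW = 2);</code></pre>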



<h3 class="wp-block-heading" id="h-8-uses-net-framework-common-language-runtime-clr">8. Uses .NET Framework: common language runtime (CLR)</h3>



<p>CLR support is not available for SQL DB, but it is available in SQL MI with some <a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server?view=azuresql#clr">important differences</a>.</p>



<h3 class="wp-block-heading" id="h-9-sql-server-agent">9. SQL Server Agent</h3>



<p>SQL Server Agent is not available in SQL DB, and <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-jobs-overview?view=azuresql">elastic jobs</a> should be used instead. In SQL MI, SQL Server Agent is supported with <a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server?view=azuresql">important differences</a>.</p>
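

<p>To give a feel for the difference, here is a hedged sketch of recreating a simple Agent job as an elastic job. These statements run in the elastic job database, and all job, server, database, and procedure names are hypothetical:</p>



<pre class="wp-block-code"><code>-- Illustrative only: a minimal elastic job replacing a SQL Server Agent job.
-- Define which database(s) the job targets.
EXEC jobs.sp_add_target_group @target_group_name = 'MaintenanceTargets';
EXEC jobs.sp_add_target_group_member
    @target_group_name = 'MaintenanceTargets',
    @target_type = 'SqlDatabase',
    @server_name = 'myserver.database.windows.net',
    @database_name = 'MyAppDb';

-- Create the job and a single T-SQL step.
EXEC jobs.sp_add_job @job_name = 'NightlyIndexMaintenance';
EXEC jobs.sp_add_jobstep
    @job_name = 'NightlyIndexMaintenance',
    @step_name = 'RebuildIndexes',
    @command = N'EXEC dbo.usp_IndexMaintenance;',
    @target_group_name = 'MaintenanceTargets';</code></pre>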



<h3 class="wp-block-heading" id="h-10-uses-semantic-search">10. Uses semantic search</h3>



<p>Full-text semantic search is not available in either SQL MI or SQL DB.</p>



<h2 class="wp-block-heading" id="h-other-important-things-to-remember-when-migrating-to-a-paas-environment">Other important things to remember when migrating to a PaaS environment</h2>



<ul class="wp-block-list">
<li><strong>High availability</strong>: Since high availability is included in the PaaS offerings, SQL Server functionality and syntax connected with Always On Availability Groups is not supported.</li>



<li><strong>Maintenance</strong>: Updates, patches, backups, and restores are likewise managed automatically in the PaaS offerings. Therefore, associated T-SQL syntax will not work in SQL DB and will be different for SQL MI.</li>



<li><strong>Credential management</strong>: Windows authentication is not supported in SQL DB, and is replaced by <a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/winauth-azuread-overview?view=azuresql">Windows Authentication for Microsoft Entra</a> in SQL MI.</li>



<li><strong>Collation</strong>: Catalog collation is set when an instance (SQL MI) or a database (SQL DB) is created, and it cannot be changed afterwards.</li>
</ul>



<h2 class="wp-block-heading" id="h-resources-and-tools">Resources and tools</h2>



<p>I hope this provided you with a jump-start for thinking about whether your SQL Server code is Azure-ready.</p>



<p>Here are a few more resources and tools that can help you take the next steps toward a full compatibility analysis:</p>



<ul class="wp-block-list">
<li><a href="https://learn.microsoft.com/en-us/sql/dma/dma-overview?view=sql-server-ver16">Data Migration Assistant</a> – Microsoft’s robust tool for enabling database compatibility assessments, recommendations, and migration assistance.</li>



<li><a href="https://learn.microsoft.com/en-us/azure/migrate/migrate-services-overview">Azure Migrate</a> – this service can be used as a start-to-finish hub for planning and facilitating a cloud migration.</li>



<li><em><a href="https://www.amazon.com/Pro-Database-Migration-Azure-Modernization/dp/1484282299">Pro Database Migration to Azure</a> </em>– An excellent and comprehensive book covering the best practices for successful on-premises migrations to the Azure cloud platform.</li>
</ul>



<h2 class="wp-block-heading" id="h-want-to-learn-more">Want to learn more?</h2>



<p>Want to learn more about how The SERO Group helps organizations prepare for a SQL Server cloud migration? <a href="https://theserogroup.com/contact-us/">Schedule a call</a> and let&#8217;s talk.</p>
<p>The post <a href="https://theserogroup.com/azure/azure-sql-migration-code/">Is your SQL Server Code Ready for Azure?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6078</post-id>	</item>
		<item>
		<title>Data Governance in Action: 4 Challenges for Small and Mid-Sized Businesses (SMBs)</title>
		<link>https://theserogroup.com/data-strategy/data-governance-challenges-small-mid-sized-businesses/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Wed, 08 May 2024 21:30:00 +0000</pubDate>
				<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Events]]></category>
		<category><![CDATA[SQL Security]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5905</guid>

					<description><![CDATA[<p>With the explosion of AI and ever-increasing awareness of the importance of data, the term “data governance” seems to be everywhere these days. We hear it used in different contexts, and it may seem to equate to regulatory compliance or to have relevance for only the largest companies. This interpretation misses a key benefit of&#8230; <br /> <a class="read-more" href="https://theserogroup.com/data-strategy/data-governance-challenges-small-mid-sized-businesses/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/data-strategy/data-governance-challenges-small-mid-sized-businesses/">Data Governance in Action: 4 Challenges for Small and Mid-Sized Businesses (SMBs)</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>With the explosion of AI and ever-increasing awareness of the importance of data, the term “data governance” seems to be everywhere these days. We hear it used in different contexts, and it may seem to equate to regulatory compliance or to have relevance for only the largest companies.</p>



<p>This interpretation misses a key benefit of data governance, however: <strong>competitive advantage</strong>. Regulatory compliance is critically important and rightly shapes governance efforts, but it is not the whole picture when it comes to data governance.</p>



<h2 class="wp-block-heading" id="h-governing-data-as-an-asset">Governing Data as an Asset</h2>



<p>Data is a business asset, not unlike cash. If you asked any business leader whether their cash flow processes needed controls to ensure their accuracy, integrity, and protection, they would likely give you a quizzical look and assume you were baiting them into some trap…<em>of course their cash flow processes need controls.</em></p>



<p>Similarly, data has the potential to guide and inform nearly every aspect of a business. To leverage data effectively, businesses must ensure its integrity in an ongoing and reliable way. From financial reporting to performance evaluation to operational efficiency, data quality sits at the heart of modern businesses. Later, we will see some examples of how data quality and integrity issues can have significant impacts on SMBs.</p>



<p>Additionally, company data assets need to be <strong>secure</strong> and <strong>protected</strong>—arguably even more so than other company assets. This is because many personal data elements <em>do not ultimately belong to a business</em>. These considerations impact regulatory compliance and are essential for effectively and responsibly leveraging data as the valuable asset that it is.</p>



<h3 class="wp-block-heading" id="h-the-policies-and-processes-that-ensure-the-availability-usability-integrity-and-security-of-data-assets-are-what-we-mean-by-data-governance"><strong>The policies and processes that ensure the availability, usability, integrity, and security of data assets are what we mean by<em> data governance</em>.</strong></h3>



<p>Did you know that…</p>



<ul class="wp-block-list">
<li><strong>97%</strong> of data leaders state that their companies have experienced the costs of disregarding data quality and integrity in the form of lost revenue opportunities, inaccurate performance forecasting, and/or poor investments.</li>



<li>More than <strong>87%</strong> of small and mid-sized businesses (SMBs) collect or process sensitive customer data that could be compromised.</li>



<li>Small businesses spend an average of <strong>$955,429</strong> to restore normal business operations in the wake of successful attacks.</li>
</ul>



<p>AND…</p>



<ul class="wp-block-list">
<li>About <strong>85% </strong>of data science and analytics projects <strong>fail </strong>due in part to disregarding data governance processes.</li>
</ul>



<h1 class="wp-block-heading" id="h-four-common-challenges-for-smbs">Four Common Challenges for SMBs</h1>



<h2 class="wp-block-heading" id="h-challenge-1-no-buy-in">Challenge #1: No Buy-In</h2>



<p>To effect change within an organization, you must secure buy-in from key constituents. For data governance, these important players begin with executive leadership and extend through the organizational hierarchy to front-line staff. However, some SMBs question whether data governance has any real applicability for them. In response, consider: Do you store sensitive customer, client, or patient data? Do you keep sales, productivity, performance, or program data for decision-making? If you answered yes to any of those questions, data governance principles do have applicability in your environment.</p>



<p><strong>Business leaders</strong> and <strong>decision-makers</strong> should prioritize data governance because their decisions directly impact the company’s value and profitability. Without data governance policies, decision-makers can only hope that their data-driven decisions are well-founded. Inaccurate data can lead to poor decisions in ways that are difficult to detect or explain even after the fact, leaving decision-makers accountable for the outcome.</p>



<p><strong>Securing employee buy-in</strong> is equally important when implementing data governance. Buy-in should stem from an understanding of the shared responsibility for the data that drives business outcomes. A collective sense of accountability for data quality works to prevent resentment and misunderstandings towards your policies that could undermine even the best data governance efforts.</p>



<p><strong>Tips for Success:</strong></p>



<ul class="wp-block-list">
<li><strong>Elicit interest and assistance from vested stakeholders</strong>. Leverage opportunities to ease business pain points through data governance. I&#8217;ll list some examples in &#8220;Data Governance Challenges in the Wild&#8221; below.</li>



<li><strong>Pave a step-by-step path to mature your data culture</strong>. Start with education, delegated responsibility, and distributed data ownership.</li>



<li><strong>Align policies</strong> <strong>with the existing strategic objectives of the company</strong>. Be specific about the expected business outcomes of your efforts.</li>



<li><strong>Empower business stakeholders</strong> to take ownership of and make decisions about business data.</li>



<li><strong>Create strategic goals and an initial roadmap.</strong> This will work best if it is <em>an ongoing organizational effort that is largely led by the business</em>, not by IT.</li>
</ul>



<h2 class="wp-block-heading" id="h-challenge-2-no-budget">Challenge #2: No Budget</h2>



<p>Chances are that unless improving data governance is already a top business priority, perhaps because of a recent security incident, audit, or a particularly problematic data irregularity, you will be up against budget constraints when introducing formalized data governance to an SMB.</p>



<p>This is a challenge, but not a barrier. Yes, there are tools on the market that can be very helpful, but they are by no means necessary for SMBs—and certainly not at the outset.</p>



<h3 class="wp-block-heading" id="h-tips-for-success"><strong>Tips for Success:</strong></h3>



<ul class="wp-block-list">
<li><strong>Start small and go slow.</strong> Introduce change gradually by beginning with what you can accomplish internally. Define and document business logic for your most critical data objects, and then introduce business processes to enforce that logic.</li>



<li><strong>Treat data governance as a process of continuous improvement</strong> rather than a costly one-time project. Small, inexpensive wins can produce significant results.</li>



<li><strong>Avoid a box-checking mentality.</strong> Use internal data audits to showcase regulatory compliance, but don&#8217;t let minimal standards dictate strategy. Prioritize business advantage over compliance, focusing first on areas with immediate value, such as labor cost savings, enhanced reporting, or boosted analytical capabilities.</li>



<li><strong>Leverage tools strategically.</strong> Develop your strategy and framework first. Look to tools second, and only for ease of implementation and execution. Tools alone will not provide true data governance.</li>
</ul>



<h2 class="wp-block-heading" id="h-challenge-3-no-champion">Challenge #3: No Champion</h2>



<p>You know that data governance is important for your organization and that someone needs to advocate for it. You also might not have the authority, capacity, or desire to be that person. In that case, use the suggestions about securing buy-in from Challenge #1 to find or create a data governance champion within your organization.</p>



<h3 class="wp-block-heading" id="h-tips-for-success-0"><strong>Tips for Success:</strong></h3>



<ul class="wp-block-list">
<li><strong>Business and IT need to collaborate for effective data governance. </strong>There should be a designated senior IT team leader who is collaborating closely with at least one senior business leader. In a small organization, this could simply mean your one IT resource working with the business owner. Regardless of the size of the organization, the effort should not be one-sided.</li>



<li><strong>Champions should not limit their scope to policies alone.</strong> Having centralized documentation, tools, and processes at the ready will help to avoid individuals and teams developing their own tools and processes to implement the policies their own way. Disparate processes can lead to inconsistency and diminished success.</li>



<li><strong>Champions need to be committed and persistent. </strong>Don&#8217;t roll out data governance processes and forget about them. Remember that education and training will need to be ongoing, as will the refinement of governance practices to keep pace with technology and changing business requirements.</li>



<li><strong>Champions should consider the needs of the employees that work with the data</strong> <strong>when creating policies and approving tools</strong>. If your governance initiatives are seen as an unnecessary burden, workarounds will be developed that could compromise your efforts.</li>
</ul>



<h2 class="wp-block-heading" id="h-challenge-4-internal-skills-gap">Challenge #4: Internal Skills Gap</h2>



<p>You have the needed organizational buy-in, a small budget, and even a champion, but your IT team is small (or outsourced), without any data professionals on staff. Where can you begin?</p>



<p>Remember, the key is to go slowly and implement organically. The full implementation of data governance will ultimately require technology expertise, but defining rules, requirements, and workflows may not. Start there, then reach out for help with technical implementation when you&#8217;re ready.</p>



<h3 class="wp-block-heading" id="h-tips-for-success-1"><strong>Tips for Success:</strong></h3>



<ul class="wp-block-list">
<li><strong>Research any compliance regulations applicable to your business sector</strong>. Perform an analysis of how your business is meeting these requirements. Where you find gaps, assess a strategy for how to fill those gaps. Implementation can be left to a third party as needed, but be sure that you understand what needs to be done. If you are unsure about how to get started, use some of that budget to reach out for support.</li>



<li><strong>Perform a security audit of your data systems.</strong> Adjust and delete credentials and permissions as appropriate. Review third-party system access. Inventory inbound, outbound, and internal data pipelines and evaluate for best practices. Then set up ongoing processes and/or reports to assist with security monitoring.</li>



<li><strong>Look for critical data objects that need to be formally defined.</strong> Beginning with mission-critical data elements, document all business logic and metadata requirements associated with these elements in a data dictionary or catalog. This doesn&#8217;t need to be elaborate, but it should cover all key data elements. Start in one business area and expand out from there.</li>



<li><strong>Assign ownership and accountability</strong>. Look for data owners who are invested in the quality of the data within their domain because of their connection to the processes that produce or consume the data.</li>



<li><strong>Upskill and/or seek third-party guidance as required</strong>. Provide access to resources for interested staff to get trained to become data owners, stewards, and partners in data governance. Reach out to third parties for guidance in areas that are outside the internal expertise of the organization.</li>
</ul>
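<p>As a hedged sketch of the security-audit step above, the following query uses standard SQL Server catalog views to list explicit permission grants and denies in a database. It is a starting point for a review, not a complete audit:</p>

<pre class="wp-block-code"><code>-- List explicit permission grants and denies in the current database
SELECT pr.name                  AS principal_name,
       pr.type_desc             AS principal_type,
       pe.permission_name,
       pe.state_desc,
       OBJECT_NAME(pe.major_id) AS object_name  -- NULL for database-level permissions
FROM sys.database_permissions pe
JOIN sys.database_principals pr
  ON pe.grantee_principal_id = pr.principal_id
ORDER BY pr.name, pe.permission_name;</code></pre>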



<h2 class="wp-block-heading" id="h-data-governance-challenges-in-the-wild">Data Governance Challenges in the Wild</h2>



<p>Below are some real challenges faced by SMBs that could be solved with improved data governance processes.</p>



<h3 class="wp-block-heading" id="h-local-restaurant-chain">Local Restaurant Chain</h3>


<p>A small, local restaurant chain encountered a discrepancy between a company report on drink sales and individual location reports generated in Excel. This irregularity surfaced after a competition between locations, held with a cash prize to incentivize staff to sell more drinks. At the end of the competition, when managers compared their individual reports, Store A had sold the most drinks. According to the central company report, Store B had. Which was correct?</p>
<p>Upon investigation, one of the restaurant locations had long ago coded milkshakes as drinks. All of the other locations coded them as desserts. This was never audited or corrected. As a result, the individual reports that were rolled up by individual menu item codes produced a different result than the central report that was rolled up by the “Drink” category. This confusion affected competition results as well as drink and dessert sales analytics and other location-to-location comparisons.</p>
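<p>To illustrate how the two reports diverged, here is a simplified sketch with hypothetical table, column, and code names. The roll-ups disagree because the miscoded milkshakes fall inside one filter but not the other:</p>

<pre class="wp-block-code"><code>-- Central report: roll up by category
-- (misses milkshakes that one store coded as 'Dessert')
SELECT store, SUM(quantity) AS drinks_sold
FROM sales
WHERE category = 'Drink'
GROUP BY store;

-- Location reports: roll up by individual menu item codes, milkshakes included
SELECT store, SUM(quantity) AS drinks_sold
FROM sales
WHERE item_code IN ('SODA', 'COFFEE', 'MILKSHAKE' /* ... */)
GROUP BY store;</code></pre>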
<h3 id="h-small-food-manufacturer" class="wp-block-heading">Small Food Manufacturer</h3>
<p>A large grocery store chain approached a regional manufacturer of salad dressings and condiments, requesting the production of one of its products as a generic for the chain. Similarly, a bulk food retailer asked the same manufacturer to produce a bulk version of the same product. The contracts were accepted, but the original product, the generic version, and the bulk version were all assigned different product codes without being rolled up to any parent product category. When the accounting office ran their standard report of overall sales trends by product, they had no way of recognizing that these three items represented the same underlying product and ended up producing a skewed and incomplete product analysis.</p>
<h3 id="h-international-boutique-wholesaler" class="wp-block-heading">International Boutique Wholesaler</h3>
<p>A private international wholesaler with dispersed brick-and-mortar locations was having continual problems with their inventory. Customers complained that the website frequently showed products as available when they were not. Retail associates complained that the inventory in the POS was incorrect and frequently showed negative inventory when stock was actually present in the stores.</p>
<p>One of the issues in this scenario was that individual store inventory was manually counted and entered with different workflows, lacking a centralized mechanism for tracking inventory. Furthermore, associates often used the “miscellaneous” category to sell products that were at the stores but didn&#8217;t have a corresponding inventory item recognized in the system. These issues produced a myriad of operational, financial, and analytic discrepancies, as well as increased costs in the form of labor inefficiencies, lost revenue, and misplaced or stolen inventory.</p>
<h3 id="h-social-services-organization" class="wp-block-heading">Social Services Organization</h3>
<p>A social services organization gathers information about its clients to produce an annual statistical analysis of clients and outcomes. This nationally distributed, highly regarded report serves as a source for scholarly research and influences funding for numerous public and private institutions.</p>
<p>However, the organization faces challenges in bringing consistency to its intake processes. Since many clients arrive in crisis, rushed intake personnel often skip optional steps without realizing the importance of each one. Furthermore, the application used to collect this valuable information does not require answers to many questions that the organization deems critical for the annual report. The organization has attempted many process changes to improve the consistency of this workflow, but the problem persists and is consistently cited as a limitation of their statistical findings.</p>
<h2 id="h-is-data-governance-important-for-smbs" class="wp-block-heading">Is data governance important for SMBs?</h2>
<p>This is just a small handful of examples of how a lack of data governance can affect SMBs. In fact, once you&#8217;re on the lookout for data governance issues, you may start to notice them everywhere.</p>
<p>Additionally, <strong>we have only begun to touch on the security and compliance aspects of data governance</strong>. Many of the challenges to implementation outlined here are less prominent when a regulatory requirement is at play since compliance is mandated. Even when regulations play a less obvious role, most of us are aware of the gravity of security risks. <strong>It is important to note, however, that this security risk is even greater for SMBs than for the largest companies. </strong>According to <a href="https://www.veeam.com/blog/small-business-ransomware.html#:~:text=It's%20a%20common%20misconception%20that,ransomware%20attacks%20targeted%20small%20businesses.">Veeam’s 2023 Data Protection Trends Report</a>, 85% of ransomware attacks targeted small businesses.</p>
<p>The take-away? If your business or organization retains data, and very few (if any) businesses do not, formalizing some level of data governance will help you to securely and responsibly leverage that asset to drive your business.</p>
<h3 id="h-further-reading" class="wp-block-heading">Further Reading</h3>
<ul class="wp-block-list">
<li><a href="https://www.fundera.com/resources/small-business-cyber-security-statistics">More interesting statistics</a></li>
<li><a href="https://cyber.harvard.edu/ecommerce/privacyaudit.html">A privacy audit checklist</a></li>
<li><a href="https://fundcount.com/a-guide-to-data-governance-for-small-businesses/#:~:text=In%20essence%2C%20data%20governance%20is,like%20organizing%20your%20home%20office.">A guide to data governance for small businesses</a></li>
<li><a href="https://www.cio.com/article/202183/what-is-data-governance-a-best-practices-framework-for-managing-data-assets.html">Data governance frameworks</a></li>
<li><a href="https://aws.amazon.com/what-is/data-governance/">Amazon’s guide to data governance for SMBs</a></li>
</ul>
<h2 id="h-want-to-continue-the-conversation-about-data-governance-for-smbs" class="wp-block-heading">Want to continue the conversation about data governance for SMBs?</h2>
<p>To learn more, you can <a href="https://www.youtube.com/watch?v=dgfGUkEeAe0">click here</a> to access a recording of our May 14, 2024 webinar on Data Governance for SMBs.</p>



<p>Need help? We can help you tailor practical data governance solutions to meet the specific needs of your SMB. <a href="https://theserogroup.com/#contact" target="_blank" rel="noreferrer noopener">Schedule a call</a> or <a href="mailto:joew@theserogroup.com" target="_blank" rel="noreferrer noopener">send us an email</a>. </p>
<p>The post <a href="https://theserogroup.com/data-strategy/data-governance-challenges-small-mid-sized-businesses/">Data Governance in Action: 4 Challenges for Small and Mid-Sized Businesses (SMBs)</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5905</post-id>	</item>
		<item>
		<title>Log-Wrangling 101: 7 Tips for Managing Your SQL Server Transaction Logs</title>
		<link>https://theserogroup.com/dba/7-tips-for-managing-your-sql-server-transaction-logs/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Tue, 09 Apr 2024 13:14:05 +0000</pubDate>
				<category><![CDATA[DBA]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[Database Development]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Assessment]]></category>
		<category><![CDATA[SQL Audit]]></category>
		<category><![CDATA[SQL Consultant]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<category><![CDATA[The Sero Group]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5825</guid>

					<description><![CDATA[<p>In my last post, I broke down the parts of the anatomy of the SQL Server transaction log. In this post, I will share a few tips for keeping your transaction logs well-maintained and your SQL Server databases happy and healthy. Here are 7 important tips for managing your transaction logs: Now, let’s break these&#8230; <br /> <a class="read-more" href="https://theserogroup.com/dba/7-tips-for-managing-your-sql-server-transaction-logs/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/dba/7-tips-for-managing-your-sql-server-transaction-logs/">Log-Wrangling 101: 7 Tips for Managing Your SQL Server Transaction Logs</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In my last post, I broke down the parts of the <a href="https://theserogroup.com/sql-server/anatomy-of-a-sql-server-transaction-log/#:~:text=Within%20SQL%20Server's%20transaction%20logs,may%20have%20numerous%20log%20records.">anatomy of the SQL Server transaction log</a>. In this post, I will share a few tips for keeping your transaction logs well-maintained and your SQL Server databases happy and healthy.</p>



<p>Here are 7 important tips for managing your transaction logs:</p>



<ol class="wp-block-list">
<li><strong>Select the optimal recovery model and backup strategy for your database.</strong></li>



<li><strong>Configure Auto-Growth setting for optimal performance.</strong></li>



<li><strong>Configure the Max File Size of the transaction log.</strong></li>



<li><strong>Store log files on a separate drive from data files.</strong></li>



<li><strong>Minimize the use of log shrinking.</strong></li>



<li><strong>Use only one log file.</strong></li>



<li><strong>Monitor log file size and growth regularly.</strong></li>
</ol>



<p>Now, let’s break these down one at a time.</p>



<h2 class="wp-block-heading" id="h-recovery-model-and-backups">Recovery Model and Backups</h2>



<p>First, the #1 factor to consider in managing SQL Server transaction logs is the recovery model setting of each database. Additionally, you must consider the correlated backup processes each model requires for recovery.</p>



<p>In a <a href="https://theserogroup.com/sql-server/sql-server-recovery-models/" target="_blank" rel="noreferrer noopener">recent post</a>, we explained that if a database is in <strong>Simple Recovery</strong> mode, SQL Server will take full or differential backups as scheduled. In this mode, however, the database can only be restored to the point of the last backup. The transaction log is truncated automatically at checkpoints, so there is no log maintenance or transaction log backup required. This is a good strategy for non-critical databases where your recovery point objective can be the point of the last full or differential backup.</p>
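You can check each database&#8217;s recovery model, and change it for a non-critical database, with a quick query; the database name below is a placeholder:

<pre class="wp-block-code"><code>-- Check the recovery model of every database on the instance
SELECT name, recovery_model_desc FROM sys.databases;

-- Switch a non-critical database to the Simple recovery model
ALTER DATABASE [YourDB] SET RECOVERY SIMPLE;</code></pre>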



<p>By contrast, databases in <strong>Full </strong>or<strong> Bulk-Logged Recovery</strong> mode rely on the DBA to maintain the logs. Administrators must ensure that they are truncated frequently enough by a transaction log backup to prevent them from filling up. A common practice used by many DBAs is to use <a href="https://ola.hallengren.com/" target="_blank" rel="noreferrer noopener">Ola Hallengren’s scripts</a> for establishing the backup chain. Additionally, a common cadence is to perform weekly full backups, daily differential backups, and transaction log backups every 15 minutes.</p>
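<p>A minimal transaction log backup, suitable for scheduling on that 15-minute cadence, might look like the following sketch (the path and database name are placeholders):</p>

<pre class="wp-block-code"><code>-- Back up the transaction log; this also allows inactive log records to be truncated
BACKUP LOG [YourDB]
TO DISK = N'X:\SQLBackups\YourDB_log.trn'
WITH COMPRESSION, CHECKSUM;</code></pre>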



<h2 class="wp-block-heading" id="h-auto-growth-setting">Auto-Growth Setting</h2>



<p>The Auto-Growth setting tells SQL Server how much to grow the log file each time it needs to expand. This growth increment should not be too small, since log file growth operations are slow and can impact query performance. This is because log files have historically been unable to use the instant file initialization (IFI) option.</p>



<p>Notably, there have been some improvements in this area if you are on SQL Server 2022. Specifically, fewer VLFs are created internally by SQL Server logging processes. Additionally, the transaction log <em>can</em> make use of IFI if it is enabled under the normal requirements and auto-growth is not set to more than 64 MB (the default setting for 2022). <a href="https://www.youtube.com/watch?v=KqBtwF991yQ" target="_blank" rel="noreferrer noopener"><em>(More below</em></a>*<a href="https://www.youtube.com/watch?v=KqBtwF991yQ"><em>)</em></a></p>



<p>For earlier editions, however, transaction logs cannot use IFI. Therefore, using a larger auto-growth setting of 256 MB to 1024 MB (or more) can be beneficial for performance. </p>



<p>Additionally, setting the initial size of the log file to 20-30% of the initial data file size is a good practice. Pay particular attention to the initial size of the tempdb log file. Tempdb will always be shrunk to its initial size after a restart and will have to re-grow to its working size, so you do not want this setting to be unreasonably small to start.</p>
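<p>For illustration, a database with a 20 GB data file might start with a log of 4-6 GB. Growing the log to a sensible starting size uses the same <code>MODIFY FILE</code> command; the logical file name below is a placeholder:</p>

<pre class="wp-block-code"><code>-- Grow the log file to a sensible starting size (placeholder names)
ALTER DATABASE [YourDB] MODIFY FILE (NAME = 'YourDB_log', SIZE = 4096MB);</code></pre>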



<h3 class="wp-block-heading"><strong>How to Adjust in SSMS</strong></h3>



<p>Right-click the database and select Properties. Then click <strong>Files</strong> in the list on the left.</p>



<figure class="wp-block-image size-full"><a href="https://theserogroup.com/wp-content/uploads/2024/03/image-2.png"><img fetchpriority="high" decoding="async" width="975" height="323" src="https://theserogroup.com/wp-content/uploads/2024/03/image-2.png" alt="" class="wp-image-5827" srcset="https://theserogroup.com/wp-content/uploads/2024/03/image-2.png 975w, https://theserogroup.com/wp-content/uploads/2024/03/image-2-300x99.png 300w, https://theserogroup.com/wp-content/uploads/2024/03/image-2-768x254.png 768w" sizes="(max-width: 975px) 100vw, 975px" /></a></figure>



<p>Then click on the <strong>ellipsis </strong>next to the log file,</p>



<figure class="wp-block-image size-full"><a href="https://theserogroup.com/wp-content/uploads/2024/03/image-3.png"><img decoding="async" width="458" height="339" src="https://theserogroup.com/wp-content/uploads/2024/03/image-3.png" alt="" class="wp-image-5828" srcset="https://theserogroup.com/wp-content/uploads/2024/03/image-3.png 458w, https://theserogroup.com/wp-content/uploads/2024/03/image-3-300x222.png 300w" sizes="(max-width: 458px) 100vw, 458px" /></a></figure>



<p>and change the <strong>File Growth</strong> to the appropriate value.</p>



<p>You can also run the below query to update the setting:</p>



<p><code>ALTER DATABASE [YourDB] MODIFY FILE (NAME = 'YourDB_log', FILEGROWTH = 256MB);</code></p>



<h2 class="wp-block-heading" id="h-max-file-size-setting">Max File Size Setting</h2>



<p>Configuring the Max File Size setting is not necessary for all databases. However, it can be useful for databases where there is a reasonable likelihood of a runaway query or process growing the log until it consumes all space on the server and causes SQL Server to fail.</p>



<p>If this setting is in place in such an event, the database will remain readable, but SQL Server will <a id="Tip4"></a>be unable to commit transactions and will throw errors. It will not, however, fill the entire server and cause SQL Server to fail.</p>



<h3 class="wp-block-heading"><strong>How to Adjust in SSMS</strong></h3>



<p>Follow the same directions as above, and in the<strong> Autogrowth</strong> window, change the <strong>Maximum File Size</strong>:</p>



<figure class="wp-block-image size-full"><a href="https://theserogroup.com/wp-content/uploads/2024/03/image-4.png"><img decoding="async" width="446" height="320" src="https://theserogroup.com/wp-content/uploads/2024/03/image-4.png" alt="" class="wp-image-5829" srcset="https://theserogroup.com/wp-content/uploads/2024/03/image-4.png 446w, https://theserogroup.com/wp-content/uploads/2024/03/image-4-300x215.png 300w" sizes="(max-width: 446px) 100vw, 446px" /></a></figure>



<p>Or use this query:</p>



<p><code>ALTER DATABASE [YourDB] MODIFY FILE (NAME = 'YourDB_logfilename', MAXSIZE = 64000MB);</code></p>



<h2 class="wp-block-heading" id="h-log-storage">Log Storage</h2>



<p>Best practice is to isolate your log files on a separate physical drive from your data files for optimal performance. This is to prevent the intensive sequential write workload required for the logs from interfering with the random write workload used for the data files.</p>
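<p>If your log file currently shares a drive with your data files, you can point SQL Server at a new location and move the physical file while the database is offline. This is a sketch with placeholder names and paths:</p>

<pre class="wp-block-code"><code>-- Tell SQL Server where the log file will live after the move
ALTER DATABASE [YourDB]
MODIFY FILE (NAME = 'YourDB_log', FILENAME = N'L:\SQLLogs\YourDB_log.ldf');

ALTER DATABASE [YourDB] SET OFFLINE;
-- Move the .ldf file to the new drive in the operating system, then:
ALTER DATABASE [YourDB] SET ONLINE;</code></pre>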



<h2 class="wp-block-heading" id="h-log-shrinking">Log Shrinking</h2>



<p>Occasionally, we will see databases with log shrinking employed as a part of routine maintenance or the Auto-Shrink feature turned on. Both should be avoided.</p>



<p>Log shrinking should be reserved for occasions when there is a large amount of unused log space or when bulk operations may have caused the log to grow more than it ordinarily would. <em>(Note that databases that see many bulk operations may benefit from the </em><a href="https://theserogroup.com/sql-server/sql-server-recovery-models/" target="_blank" rel="noreferrer noopener"><em>Bulk-Logged recovery model</em></a><em> setting, which minimally logs qualifying bulk operations.)</em></p>



<p>Using Auto-Shrink should only be done with caution and full awareness of the implications, since it can lead to significant performance issues. Microsoft’s <a href="https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/database-file-operations/considerations-autogrow-autoshrink" target="_blank" rel="noreferrer noopener">documentation</a> does a good job of outlining the feature and its implications<a id="Tip6"></a>.</p>



<h3 class="wp-block-heading"><strong>How to Identify and Shrink Appropriate Logs in SSMS</strong></h3>



<p>Obtain information about log space consumption using:<br><code>DBCC SQLPERF(logspace)</code></p>



<p>If necessary, files with an unusually large percentage of unused log space can be shrunk using:</p>



<p><code>DBCC SHRINKFILE(YourDB_log, 1024); -- shrink log to 1 GB</code></p>



<p>The appropriate size to which to shrink the log will depend on the database. Shrinking to an average log size makes sense. <a href="https://www.mssqltips.com/sqlservertip/1178/monitoring-sql-server-database-transaction-log-space/"><em>(See below for information about obtaining this.**)</em></a></p>



<h2 class="wp-block-heading" id="h-use-one-log-file">Use One Log File</h2>



<p>Only set up multiple log files for a database in unique situations where space is limited (like in the case of a full log drive). If this is required, you should remedy this situation as soon as possible.</p>



<p>Understand that setting up multiple log files does<em> not</em> allow SQL Server to write to them in parallel and does <em>not </em>improve performance. In fact, having multiple large log files will significantly slow down your recovery if you need to restore your database in a DR scenario.</p>
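<p>If you have inherited a database with a surplus log file, removing it is straightforward once the file contains no active log records; in Full recovery, a log backup (repeated if necessary) typically clears it. The file and path names here are placeholders:</p>

<pre class="wp-block-code"><code>-- Clear active log records from the surplus file, then remove it
BACKUP LOG [YourDB] TO DISK = N'X:\SQLBackups\YourDB_log.trn';
ALTER DATABASE [YourDB] REMOVE FILE [YourDB_log2];</code></pre>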



<h2 class="wp-block-heading" id="h-monitor-regularly">Monitor Regularly</h2>



<p>Once you have your databases configured according to best practices and your maintenance processes established, your transaction logs could easily operate without issue for years!</p>



<p>It is still good practice, however, to monitor the log file size, growth, and performance for issues before they have a chance to escalate to crisis events.</p>



<p>There are multiple tools and services on the market that can assist with this, including our own <a href="https://theserogroup.com/seroshield/" target="_blank" rel="noreferrer noopener">SEROShield</a>, but you can also monitor with reports that provide insight through queries like the following:</p>



<h3 class="wp-block-heading"><strong>Monitoring Log Usage</strong></h3>



<p><code>DBCC SQLPERF(logspace)</code></p>



<figure class="wp-block-image size-full is-resized"><a href="https://theserogroup.com/wp-content/uploads/2024/03/image-5.png"><img loading="lazy" decoding="async" width="695" height="275" src="https://theserogroup.com/wp-content/uploads/2024/03/image-5.png" alt="" class="wp-image-5830" style="width:459px;height:auto" srcset="https://theserogroup.com/wp-content/uploads/2024/03/image-5.png 695w, https://theserogroup.com/wp-content/uploads/2024/03/image-5-300x119.png 300w" sizes="auto, (max-width: 695px) 100vw, 695px" /></a></figure>



<p><code>SELECT * FROM sys.dm_db_log_space_usage</code></p>



<figure class="wp-block-image size-full"><a href="https://theserogroup.com/wp-content/uploads/2024/03/image-6.png"><img loading="lazy" decoding="async" width="975" height="70" src="https://theserogroup.com/wp-content/uploads/2024/03/image-6.png" alt="" class="wp-image-5831" srcset="https://theserogroup.com/wp-content/uploads/2024/03/image-6.png 975w, https://theserogroup.com/wp-content/uploads/2024/03/image-6-300x22.png 300w, https://theserogroup.com/wp-content/uploads/2024/03/image-6-768x55.png 768w" sizes="auto, (max-width: 975px) 100vw, 975px" /></a></figure>



<h3 class="wp-block-heading"><strong>Why a Log is Awaiting Truncation</strong></h3>



<p><code>SELECT [name] as DatabaseName, [log_reuse_wait_desc] FROM sys.databases</code></p>



<figure class="wp-block-image size-full is-resized"><a href="https://theserogroup.com/wp-content/uploads/2024/03/image-7.png"><img loading="lazy" decoding="async" width="496" height="279" src="https://theserogroup.com/wp-content/uploads/2024/03/image-7.png" alt="" class="wp-image-5832" style="width:349px;height:auto" srcset="https://theserogroup.com/wp-content/uploads/2024/03/image-7.png 496w, https://theserogroup.com/wp-content/uploads/2024/03/image-7-300x169.png 300w" sizes="auto, (max-width: 496px) 100vw, 496px" /></a></figure>



<h3 class="wp-block-heading"><strong>Monitoring Virtual Log Files (VLFs)</strong></h3>



<p><code>DBCC LOGINFO</code></p>



<figure class="wp-block-image size-full is-resized"><a href="https://theserogroup.com/wp-content/uploads/2024/03/image-8.png"><img loading="lazy" decoding="async" width="890" height="288" src="https://theserogroup.com/wp-content/uploads/2024/03/image-8.png" alt="" class="wp-image-5833" style="width:609px;height:auto" srcset="https://theserogroup.com/wp-content/uploads/2024/03/image-8.png 890w, https://theserogroup.com/wp-content/uploads/2024/03/image-8-300x97.png 300w, https://theserogroup.com/wp-content/uploads/2024/03/image-8-768x249.png 768w" sizes="auto, (max-width: 890px) 100vw, 890px" /></a></figure>
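<p>Note that <code>DBCC LOGINFO</code> is undocumented. On SQL Server 2016 SP2 and later, the documented <code>sys.dm_db_log_info</code> DMV returns the same per-VLF detail and is a safer foundation for monitoring scripts:</p>

<p><code>SELECT * FROM sys.dm_db_log_info(DB_ID())</code></p>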



<h2 class="wp-block-heading" id="h-want-to-learn-more">Want to learn more?</h2>



<p>Here are a few more resources about some of the topics we touched on:</p>



<ul class="wp-block-list">
<li><a id="Video">*</a><a href="https://www.youtube.com/watch?v=KqBtwF991yQ" target="_blank" rel="noreferrer noopener">SQL Server 2022 Changes</a> – a very interesting demo of the new functionality</li>



<li><a href="https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-instant-file-initialization?view=sql-server-ver16#instant-file-initialization-and-the-transaction-log" target="_blank" rel="noreferrer noopener">Instant File Initialization (IFI) and the Transaction Log</a> – Microsoft documentation</li>



<li><a href="https://www.sqlshack.com/sql-server-transaction-log-part-3-configuration-best-practices/" target="_blank" rel="noreferrer noopener">More Information and Best Practices</a></li>



<li><a href="https://www.mssqltips.com/sqlservertip/2092/sql-server-transaction-log-grows-and-fills-up-drive/" target="_blank" rel="noreferrer noopener">Transaction Log Growth Triage</a> – a nice post on triaging a transaction log growth event</li>



<li><a id="LogUsage">**</a><a href="https://www.mssqltips.com/sqlservertip/1178/monitoring-sql-server-database-transaction-log-space/" target="_blank" rel="noreferrer noopener">Tracking Log Usage</a> – a process for tracking log usage over time</li>



<li><a href="https://www.sqlshack.com/sql-server-transaction-log-growth-monitoring-and-management/" target="_blank" rel="noreferrer noopener">More on Monitoring</a></li>



<li><a href="https://www.sqlshack.com/what-are-sql-virtual-log-files-aka-sql-server-vlfs/" target="_blank" rel="noreferrer noopener">Virtual Log Files (VLFs)</a></li>
</ul>



<p>Your SQL log file configuration can affect performance. It can also affect the recoverability of key databases. If you&#8217;re concerned about your log file configuration, we’re happy to help. <a href="https://theserogroup.com/#contact">Contact us</a> to set up a short call to discuss. </p>
<p>The post <a href="https://theserogroup.com/dba/7-tips-for-managing-your-sql-server-transaction-logs/">Log-Wrangling 101: 7 Tips for Managing Your SQL Server Transaction Logs</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5825</post-id>	</item>
		<item>
		<title>Anatomy of a SQL Server Transaction Log</title>
		<link>https://theserogroup.com/sql-server/anatomy-of-a-sql-server-transaction-log/</link>
					<comments>https://theserogroup.com/sql-server/anatomy-of-a-sql-server-transaction-log/#comments</comments>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Tue, 19 Mar 2024 12:27:40 +0000</pubDate>
				<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[Database Development]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Consultant]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<category><![CDATA[SQL Training]]></category>
		<category><![CDATA[The Sero Group]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5806</guid>

					<description><![CDATA[<p>Recently, we discussed the role of the recovery model in establishing how SQL Server manages database transaction logs. But what is the SQL Server log composed of? How does the logging process work? In this post, we will dissect the SQL Server transaction log to uncover its core anatomy. First, what is the transaction log?&#8230; <br /> <a class="read-more" href="https://theserogroup.com/sql-server/anatomy-of-a-sql-server-transaction-log/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/sql-server/anatomy-of-a-sql-server-transaction-log/">Anatomy of a SQL Server Transaction Log</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Recently, we discussed the role of the <a href="https://theserogroup.com/sql-server/sql-server-recovery-models/" target="_blank" rel="noreferrer noopener">recovery model</a> in establishing how SQL Server manages database transaction logs. But what is the SQL Server log composed of? How does the logging process work? In this post, we will dissect the SQL Server transaction log to uncover its core anatomy.</p>



<h2 class="wp-block-heading">First, what <em>is</em> the transaction log?</h2>



<p>The SQL Server transaction log is a crucial element within each database that records all database changes across time. This log enables point-in-time recoverability and rollbacks in the event of mistakes, corruption, or failure. This recoverability is critical to ensuring the integrity and resilience of the database.</p>



<h2 class="wp-block-heading">What is contained in the transaction log?</h2>



<p>There are several key concepts and log components to be aware of when dealing with SQL Server’s transaction logs.</p>



<h3 class="wp-block-heading">1. Log Records</h3>



<p>Within SQL Server’s transaction logs, individual change operations are enclosed within <strong>log records</strong>. Each log record represents one UPDATE, INSERT, or DELETE operation in the database and is associated with one transaction. Each transaction, however, may have numerous log records.</p>



<p>Every data modification made in the database has one or more log records associated with it. Each log record stores either the logical operation that was performed or before-and-after images of the modified data. The start and end times of each transaction are recorded as well.</p>



<h3 class="wp-block-heading">2. Log Blocks</h3>



<p>A <strong>log block</strong> is the basic unit of I/O when it comes to transaction logs. The sizes of the log blocks vary but are at least 512 bytes and can be as large as 60 KB. Log blocks can contain multiple log records or none. According to <a href="https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver16">Microsoft</a>, <em>“a log block is a container of log records that&#8217;s used as the basic unit of transaction logging when writing log records to disk.”</em></p>



<h3 class="wp-block-heading">3. Virtual Log Files (VLFs)</h3>



<p>When considering the database transaction log, it is important to keep in mind the concept of the physical log file and its composite VLFs, or <strong>virtual log files</strong>.</p>



<p>The division of the log into multiple log segments (or virtual logs) is handled automatically by SQL Server and is the foundation of the circular nature of the physical log file (see below). Each VLF is in turn composed of numerous log blocks.</p>



<p>VLFs are created dynamically as needed by SQL Server. While we as administrators are not able to set a static size for these file divisions, we can manage the auto-growth settings of the log file, which will in turn impact the size and number of the VLFs that SQL Server creates.</p>
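<p>As a rough health check, you can count the VLFs in each database. For example, on SQL Server 2016 SP2 and later, a query along these lines works (a sketch; adjust to your environment):</p>

<pre class="wp-block-code"><code>-- Count the virtual log files in each database's transaction log
SELECT d.[name] AS DatabaseName,
       COUNT(*) AS VLFCount
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.[name]
ORDER BY VLFCount DESC;</code></pre>

<p>An unusually high VLF count is often a sign that the log grew many times in small increments and that its size and auto-growth settings deserve a review.</p>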



<h3 class="wp-block-heading">4. Log Sequence Number (LSN)</h3>



<p>Within the database’s transaction log, the chronological consistency of transactions is preserved across the VLFs through <strong>LSNs</strong>, or log sequence numbers. These numbers allow SQL Server to arrange transactions in their correct chronological order for the purposes of rollback and recovery, even when their log records span multiple VLFs.</p>



<p>The LSN is a concatenation of the VLF ID, the Log Block ID, and the Log Record ID separated by colons. You can see examples of the LSN by querying the <em>sys.dm_db_log_info</em> view and looking at the [vlf_create_lsn] field:</p>



<figure class="wp-block-image size-full"><a href="https://theserogroup.com/wp-content/uploads/2024/03/TransactionLogLSN.png"><img loading="lazy" decoding="async" width="124" height="118" src="https://theserogroup.com/wp-content/uploads/2024/03/TransactionLogLSN.png" alt="Example of a SQL Server transaction log LSN." class="wp-image-5807"/></a></figure>
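<p>For example, a query like the following lists the creation LSN and size of each VLF in the current database (available on SQL Server 2016 SP2 and later):</p>

<p><code>SELECT vlf_begin_offset, vlf_size_mb, vlf_create_lsn FROM sys.dm_db_log_info(DB_ID())</code></p>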



<h3 class="wp-block-heading">5. Minimum Recovery LSN (MinLSN)</h3>



<p>The MinLSN, or <strong>minimum recovery LSN</strong>, is the LSN of the oldest log record needed for a full recovery of the database. Put another way, the MinLSN is the first LSN in the <em>active</em> section or <em>tail</em> of the log, the section whose changes have not yet been written from memory to disk by a checkpoint operation.</p>



<h2 class="wp-block-heading">How does SQL Server logging work?</h2>



<h3 class="wp-block-heading">6. Checkpoint Process</h3>



<p><strong>Checkpoints</strong> are the SQL Server operations by which database modifications held in memory are written to disk. Checkpoints are triggered automatically based on the database&#8217;s recovery interval setting, by backup operations, or manually with the <code>CHECKPOINT</code> command.</p>



<p>The active section of the log that has not yet reached a checkpoint can never be truncated, so paying attention to the recovery settings of each database helps to manage checkpoints and transaction log truncation optimally.</p>
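<p>A checkpoint can also be issued on demand, and the automatic checkpoint target is governed by the server-wide recovery interval option (shown here as a sketch; changing advanced options should be done deliberately):</p>

<pre class="wp-block-code"><code>-- Issue a checkpoint in the current database on demand
CHECKPOINT;

-- View the server-wide automatic recovery interval target (advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval (min)';</code></pre>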



<h3 class="wp-block-heading">7. Cyclical Logging</h3>



<p>The SQL Server transaction log can be thought of as a revolving log file.</p>



<p>Consider that the physical log is allocated a designated amount of space in the database properties when it is configured. As mentioned above, the transaction log is made up of smaller segments (VLFs) that SQL Server writes to as part of its internal logging process – adding a new VLF each time the last one fills up. This process is internally controlled by the application.</p>



<p><strong>If the physical transaction log is never truncated, it will continue to grow, adding new VLFs as it grows, until it uses up all the space available to it and transactions against the database begin to fail with log-full errors.</strong></p>



<p>If, however, the transaction log is truncated, older VLFs containing only transactions with LSNs prior to the MinLSN are marked inactive. The space those VLFs occupied is then released for reuse, so new log records can be written there as more transactions are committed. This circular process allows the overall size of the log to remain consistent despite ongoing transaction logging.</p>



<p>This log-and-truncate cycle will continue successfully unless the logging rate exceeds the truncation rate. Note once again, however, that <strong>truncation of the transaction log cannot occur in full or bulk-logged recovery models if log backups are not taken first.</strong></p>
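<p>In those recovery models, a log backup along these lines is what makes inactive VLFs eligible for truncation (the database name and backup path below are placeholders):</p>

<pre class="wp-block-code"><code>-- Back up the transaction log so inactive VLFs can be truncated
-- (database name and backup path are placeholders)
BACKUP LOG [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase.trn';</code></pre>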



<p>For this reason, administrators should pay careful attention to the recovery and log file settings to avoid unnecessary disruptions in the SQL Server environment.</p>



<h2 class="wp-block-heading">What else should you know about SQL Server transaction logs?</h2>



<p>In an upcoming post, we will discuss how to troubleshoot some common issues with SQL Server transaction logs and give some best practices for transaction log management.</p>



<p>When configured carefully, the logging process within SQL Server can operate seamlessly for years without DBA intervention.</p>



<p>Here are some resources that contain more information about some of the topics we touched on:</p>



<ul class="wp-block-list">
<li><a href="https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver16#log-blocks">Log Blocks</a></li>



<li><a href="https://www.sqlshack.com/sql-server-transaction-log-architecture/">LSNs</a></li>



<li><a href="https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver16#virtual-log-file-creation">Virtual Log Files and Growth Algorithms</a></li>



<li><a href="https://learn.microsoft.com/en-us/sql/relational-databases/logs/database-checkpoints-sql-server?view=sql-server-ver16">SQL Server Checkpoint Process</a></li>



<li><a href="https://www.sqlshack.com/sql-server-transaction-log-backup-truncate-and-shrink-operations/">Transaction Log Backups and Log Shrinking</a></li>
</ul>



<h2 class="wp-block-heading" id="h-want-to-work-with-the-sero-group">Want to work with The SERO Group?</h2>



<p>If you’re concerned about your backup strategy, or more to the point, your ability to restore a critical database, <a href="https://theserogroup.com/#contact">contact us</a>. We’re happy to help.</p>
<p>The post <a href="https://theserogroup.com/sql-server/anatomy-of-a-sql-server-transaction-log/">Anatomy of a SQL Server Transaction Log</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://theserogroup.com/sql-server/anatomy-of-a-sql-server-transaction-log/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5806</post-id>	</item>
		<item>
		<title>What is a SQL Server Recovery Model?</title>
		<link>https://theserogroup.com/sql-server/sql-server-recovery-models/</link>
					<comments>https://theserogroup.com/sql-server/sql-server-recovery-models/#comments</comments>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Wed, 31 Jan 2024 13:00:00 +0000</pubDate>
				<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[Database Development]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Assessment]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<category><![CDATA[The Sero Group]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5696</guid>

					<description><![CDATA[<p>When we meet with clients for an initial SQL Server Health Check, we&#8217;re sometimes asked what a SQL Server recovery model is. Once explained, the natural follow up question often is: well, then which recovery model should we use? In this post, we will address both of these important questions. What is a SQL Server&#8230; <br /> <a class="read-more" href="https://theserogroup.com/sql-server/sql-server-recovery-models/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/sql-server/sql-server-recovery-models/">What is a SQL Server Recovery Model?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>When we meet with clients for an initial SQL Server Health Check, we&#8217;re sometimes asked what a SQL Server recovery model is. Once explained, the natural follow-up question often is: well, then which recovery model should we use?</p>



<p>In this post, we will address both of these important questions.</p>



<h2 class="wp-block-heading">What is a SQL Server Recovery Model?</h2>



<p>In SQL Server environments, the <a href="https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server?view=sql-server-ver16" target="_blank" rel="noreferrer noopener">recovery model setting</a> is a database-level configuration that defines how transaction logs and backups are managed for each database. This setting determines how the transaction log is managed and whether point-in-time recovery options are available for restoring that database.</p>



<p>There are three recovery model settings in SQL Server: Simple, Full, and Bulk-Logged. Let&#8217;s look at each.</p>



<h2 class="wp-block-heading">Simple Recovery Model</h2>



<p>With the Simple Recovery Model, SQL Server essentially relieves you from having to manage the transaction log. It takes care of it for you. SQL Server automatically truncates the transaction log at regular intervals. Scheduled backups of the transaction log are not taken.</p>



<p>As such, when a database is set to the Simple Recovery Model, restoring transactions made since the last full or differential backup is not an option. This means that data between backups has the potential to be lost in a crisis.</p>



<p>The Simple Recovery Model is designed for situations where point-in-time recovery is not of crucial importance. </p>



<h3 class="wp-block-heading">When should you use the Simple Recovery Model?</h3>



<p>Simple recovery should be used when the risk of data loss between scheduled full or differential backups is acceptable, and straightforward backup and recovery processes are preferred. Development, test, and <em>non-critical</em> production databases are often set to simple recovery.</p>
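<p>Switching a database to simple recovery is a single statement (the database name below is a placeholder):</p>

<p><code>ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE</code></p>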



<h3 class="wp-block-heading">Pros</h3>



<ul class="wp-block-list">
<li>Simplified backup process</li>



<li>Straightforward recovery procedures</li>



<li>Typically requires less storage for transaction logs</li>
</ul>



<h3 class="wp-block-heading">Cons</h3>



<ul class="wp-block-list">
<li>Recovery is limited to the time of the last full or differential backup</li>



<li>Potential data loss of any transaction since the last data backup</li>



<li>Cannot be used with log shipping, Always On Availability Groups, or Database Mirroring</li>
</ul>



<h2 class="wp-block-heading">Full Recovery Model</h2>



<p>With the Full Recovery Model, transaction logs are retained until they are explicitly backed up. They are not automatically managed by SQL Server. The log is only truncated after a log backup has been taken. Note: this means that regular transaction log backups are not just an option but a requirement in this model to keep the log from growing too large. If the transaction log is never backed up, it can grow unbounded and fill all available disk space. (You can set a maximum size for the transaction log to prevent it from filling the disk; however, once that maximum size is reached, the database will no longer accept transactions.)</p>



<p>The Full Recovery Model enables point-in-time recovery of your critical databases. </p>
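<p>A point-in-time restore under the Full Recovery Model looks roughly like this (database name, file paths, and the <code>STOPAT</code> time below are placeholders):</p>

<pre class="wp-block-code"><code>-- Restore the most recent full backup without recovering the database
RESTORE DATABASE [YourDatabase]
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH NORECOVERY;

-- Roll the log forward to just before the failure, then recover
RESTORE LOG [YourDatabase]
FROM DISK = N'X:\Backups\YourDatabase.trn'
WITH STOPAT = N'2024-03-19T12:00:00', RECOVERY;</code></pre>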



<h3 class="wp-block-heading">When should you use the Full Recovery Model?</h3>



<p>The Full Recovery Model is most appropriate for critical databases where potential data loss between backups must be minimized.</p>



<p>It is also appropriate to use this model when using log shipping, Always On Availability Groups, or Database Mirroring. Likewise, it can be used for databases with frequent transactions since the frequency of log backups can be scheduled manually as needed. Practice Management Systems in healthcare, Enterprise Resource Planning Systems in manufacturing, and financial systems in FinTech applications typically use the Full Recovery Model. </p>



<h3 class="wp-block-heading">Pros</h3>



<ul class="wp-block-list">
<li>Fully logged to support point-in-time recovery</li>



<li>Offers the highest level of protection against data loss for mission-critical data</li>
</ul>



<h3 class="wp-block-heading">Cons</h3>



<ul class="wp-block-list">
<li>Requires more complex backup and recovery processes, including transaction log backups</li>



<li>The transaction log can grow rapidly if not managed properly</li>
</ul>



<h2 class="wp-block-heading">Bulk-Logged Recovery Model</h2>



<p>The Bulk-Logged Recovery Model is a hybrid approach to database recovery. It is designed for databases that see frequent bulk operations, like bulk inserts or index rebuilds that would produce large transaction logs, and whose RPO (<a href="https://theserogroup.com/dba/how-to-align-your-sql-server-to-your-rpo-and-rto-goals/">recovery point objective</a>) is shorter than the time interval between full or differential backups.</p>



<p>Like the Full Recovery Model, the Bulk-Logged Recovery Model requires explicit log backups. However, unlike the Full Recovery Model, Bulk-Logged recovery uses minimal logging while bulk operations are being performed, reducing the rate at which the transaction log grows and improving performance for these operations. </p>



<p>Full transaction logging of the bulk operation is still retained in the log backup and can be leveraged for point-in-time recovery. <em>However, transactions occurring during minimally logged bulk operations are vulnerable to loss until the operation is complete and the next log backup has been taken.</em></p>
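<p>One common pattern is to switch to bulk-logged recovery only for the duration of the bulk operation and bracket the switch with log backups (a sketch; names and paths are placeholders):</p>

<pre class="wp-block-code"><code>-- Take a log backup before the switch to preserve the recovery chain
BACKUP LOG [YourDatabase] TO DISK = N'X:\Backups\YourDatabase_pre.trn';
ALTER DATABASE [YourDatabase] SET RECOVERY BULK_LOGGED;

-- ...perform the bulk load or index rebuild here...

ALTER DATABASE [YourDatabase] SET RECOVERY FULL;
BACKUP LOG [YourDatabase] TO DISK = N'X:\Backups\YourDatabase_post.trn';</code></pre>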



<h3 class="wp-block-heading">When should you use the Bulk-Logged Recovery Model?</h3>



<p>The Bulk-Logged Recovery Model is most appropriate for databases (such as data warehouses or data marts) where large-scale data operations are frequent.</p>



<h3 class="wp-block-heading">Pros</h3>



<ul class="wp-block-list">
<li>Fully logged to support point-in-time recovery like the Full Recovery Model</li>



<li>Offers performance improvement and reduced risk of log growth during bulk operations</li>
</ul>



<h3 class="wp-block-heading">Cons</h3>



<ul class="wp-block-list">
<li>Requires more complex backup and recovery processes, including transaction log backups</li>



<li>There is limited point-in-time recoverability <em>during</em> bulk operations, and restoring to the most recent log backup would be necessary in the event of a failure</li>
</ul>



<h2 class="wp-block-heading" id="h-more-information">More information</h2>



<p>Choosing the appropriate SQL Server Recovery Model for each of your databases requires an understanding of the data it contains, as well as the business&#8217;s RPO for that data. For example, if the business can afford to lose up to 24 hours of data, nightly backups and the Simple Recovery Model may be a good approach. On the other hand, if only minimal data loss is acceptable for a database, the Full Recovery Model with a combination of full backups, differential backups, and frequent log backups will likely be the model of choice.</p>



<p>Do you have questions about SQL Server Recovery Models or your recovery strategy? Are you unsure of what recovery models your SQL databases are using?</p>
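<p>You can check with a quick query against <code>sys.databases</code>:</p>

<p><code>SELECT [name], recovery_model_desc FROM sys.databases</code></p>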



<p>Here are some additional posts that may help:</p>



<ul class="wp-block-list">
<li><a href="https://theserogroup.com/dba/how-to-align-your-sql-server-to-your-rpo-and-rto-goals/">How to Align Your SQL Server to Your RPO and RTO Goals</a></li>



<li><a href="https://theserogroup.com/data-security/where-to-start-with-disaster-recovery-in-sql-server/">Where to Start with Disaster Recovery in SQL Server</a></li>



<li><a href="https://theserogroup.com/sql-server/using-vm-snapshots-to-backup-sql-server/">Using VM Snapshots to Backup SQL Server?</a></li>



<li><a href="https://theserogroup.com/dba/why-performing-database-restores-before-a-crisis-strikes-is-a-good-idea/">Why performing database restores before a crisis strikes is a good idea</a></li>
</ul>



<p>Or reach out to us! We would be happy to hop on a call or to set up an initial health check to help you perform an assessment. <a href="https://theserogroup.com/#contact" target="_blank" rel="noreferrer noopener">Schedule a call</a>&nbsp;with us to get started.</p>
<p>The post <a href="https://theserogroup.com/sql-server/sql-server-recovery-models/">What is a SQL Server Recovery Model?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://theserogroup.com/sql-server/sql-server-recovery-models/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5696</post-id>	</item>
		<item>
		<title>Archiving and Deletion Strategy&#8230;KonMari for Data Management?</title>
		<link>https://theserogroup.com/data-strategy/archiving-and-deletion-strategy-konmari-for-data-management/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Thu, 26 Oct 2023 20:09:35 +0000</pubDate>
				<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[Data Management]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[Database Development]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Events]]></category>
		<category><![CDATA[SQL Security]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5588</guid>

					<description><![CDATA[<p>Welcome to the third and final post in our series on Data Lifecycle Management (DLM), where we will talk about archiving and purging company data. In the last post, we talked about applying the Kaizen approach to data management to achieve a culture of continuous improvement on our data teams. In this post, we will&#8230; <br /> <a class="read-more" href="https://theserogroup.com/data-strategy/archiving-and-deletion-strategy-konmari-for-data-management/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/data-strategy/archiving-and-deletion-strategy-konmari-for-data-management/">Archiving and Deletion Strategy&#8230;KonMari for Data Management?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Welcome to the third and final post in our <a href="https://theserogroup.com/data-strategy/data-collection-two-key-tools-to-improve-your-data-strategy/">series</a> on Data Lifecycle Management (DLM), where we will talk about archiving and purging company data.</p>



<p>In the <a href="https://theserogroup.com/data-strategy/data-management-strategy-a-kaizen-approach/">last post</a>, we talked about applying the Kaizen approach to data management to achieve a culture of continuous improvement on our data teams.</p>



<p>In this post, we will use the <a href="https://konmari.com/about-the-konmari-method/#:~:text=The%20KonMari%20Method%E2%84%A2%20encourages,and%2C%20finally%2C%20sentimental%20items.">KonMari</a> method of simplification, recently made famous by Marie Kondo, as a lens for considering what to keep and what to purge in our business data repositories.</p>



<p>As always, we will also give some best practices for how to establish policies around archiving and purging your company data.</p>



<h2 class="wp-block-heading" id="h-data-lifecycle-management-archiving-and-deletion">Data Lifecycle Management: Archiving and Deletion</h2>



<p>As mentioned in previous posts, the final phases of Data Lifecycle Management are Archiving and Deletion.</p>



<p>These phases help to ensure that we keep and maintain only that data which is required for our business. But how do we determine what to keep active, what to archive, and what to purge?</p>



<p>We can apply some concepts from the KonMari simplification method to our data strategy here to help us decide.</p>



<h2 class="wp-block-heading" id="h-rule-1-make-the-commitment">Rule 1: Make the Commitment</h2>



<p>Kondo states that the first step in KonMari is committing to achieving your goal. This may seem like an obvious first step for any endeavor, but many companies fail to establish a data retention strategy.</p>



<p>It is not uncommon for businesses to take the approach that keeping <em>all </em>data (sometimes even in a “hot”, or readily accessible, repository) is the way to go. This tactic usually stems from either:</p>



<ol class="wp-block-list" type="A">
<li>an explicit belief that you cannot go wrong with keeping too much historical data<br><strong>or</strong></li>



<li>having no capacity to prioritize a retention strategy.</li>
</ol>



<h3 class="wp-block-heading" id="h-either-way-the-keep-everything-strategy-is-misguided-for-3-reasons">Either way, the “keep everything” strategy is misguided for 3 reasons.</h3>



<p><strong>First, the more data you keep, the more time it will take to recover in the event of a crisis.</strong><br><br>A crisis can take the form of:</p>



<ul class="wp-block-list">
<li>a lawsuit or an audit that requires <em>retrieval of specific information</em></li>
</ul>



<p>or</p>



<ul class="wp-block-list">
<li>a natural disaster, human error, criminal activity or another event that demands <em>restoration of data to a particular point in time</em>.</li>
</ul>



<p>In any case, restoration time is critical during these events. The more time it takes to retrieve the required data from your archive, the longer it will take for the business to recover.</p>



<p><strong>Second, while data retention is a necessity, it is also a liability and entails responsibility.</strong></p>



<p>Businesses must take the responsibility to respect consumer privacy rights very seriously. Part of this responsibility entails keeping consumer data for no longer than is required or for any purpose other than that for which the consumer gave consent. Even if your company does not fall under the regulatory jurisdiction of privacy laws like <a href="https://www.gdpreu.org/gdpr-requirements/">GDPR</a>, <a href="https://oag.ca.gov/privacy/ccpa">CCPA</a>, or <a href="https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/pipeda_brief/">PIPEDA</a>, the business is nevertheless liable for securely and responsibly maintaining its consumer data.</p>



<p>With any data that is retained comes the possibility that it could be stolen, leaked, or misused. This risk is unavoidable, but preserving <em>unnecessary</em> archives of historical data is a liability that ought to be avoided.</p>



<p><strong>Third, <em>“Data stores don’t grow on trees…”</em></strong></p>



<p>A well-crafted data strategy can reduce the financial cost of maintaining your data repositories, but increasingly large data stores cost the business money nevertheless.</p>



<p>There are also performance, time, system resource, and opportunity costs associated with maintaining large data stores.</p>



<p><strong>So, in short – <em>make the commitment to tidy up your unwieldy data repositories!</em></strong></p>


<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><a href="https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-scaled.jpg"><img loading="lazy" decoding="async" width="1024" height="683" src="https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-1024x683.jpg" alt="" class="wp-image-5597" style="aspect-ratio:1.499267935578331;width:448px;height:auto" srcset="https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-1024x683.jpg 1024w, https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-300x200.jpg 300w, https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-768x512.jpg 768w, https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-1536x1024.jpg 1536w, https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-2048x1365.jpg 2048w, https://theserogroup.com/wp-content/uploads/2023/10/messydesk-3-1620x1080.jpg 1620w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>
</div>


<h2 class="wp-block-heading" id="h-rule-2-imagine-your-ideal">Rule 2: Imagine Your Ideal</h2>



<p>Compliance regulations do some of the work of envisioning the ideal for us in the data world. Still, take the time to consider the ideal composition of your data repositories. Doing this can help you to think strategically about what should be kept, how and where to keep it, and for how long.</p>



<p>As mentioned in the <a href="https://theserogroup.com/data-strategy/data-management-strategy-a-kaizen-approach/">previous</a> post in this series, consider both regulatory requirements and the needs of the business for reporting, analytics, and strategic planning when determining what to keep. Consult business leaders and business analysts about what data is needed and for how long. You can even create a formal <a href="https://blog.datahubproject.io/the-what-why-and-how-of-data-contracts-278aa7c5f294" target="_blank" rel="noreferrer noopener">data contract</a> for critical data elements in your business.</p>



<h2 class="wp-block-heading" id="h-rule-3-finish-discarding-first">Rule 3: Finish Discarding First</h2>



<p>Obviously, deleting data should always be done with extreme caution and forethought.</p>



<p>Nevertheless, once you have performed an audit of your data repositories and have determined your retention strategy, you should begin implementation by purging unnecessary data. We will discuss more about how to perform this action safely and according to best practices below. &nbsp;</p>



<h2 class="wp-block-heading" id="h-rules-4-and-5-progress-by-category-and-in-the-right-order">Rules 4 and 5: Progress by Category and in the Right Order</h2>



<p>For the purposes of purging and archiving data, think in terms of a criticality/age matrix like the one below – beginning in the upper left corner and working down and to the right.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://theserogroup.com/wp-content/uploads/2023/10/image.png"><img loading="lazy" decoding="async" width="568" height="236" src="https://theserogroup.com/wp-content/uploads/2023/10/image.png" alt="" class="wp-image-5589" style="aspect-ratio:2.406779661016949;width:394px;height:auto" srcset="https://theserogroup.com/wp-content/uploads/2023/10/image.png 568w, https://theserogroup.com/wp-content/uploads/2023/10/image-300x125.png 300w" sizes="auto, (max-width: 568px) 100vw, 568px" /></a></figure>
</div>


<p>You should make incremental passes <em>across</em> departments in these stages, beginning with the oldest and least important data in each departmental area.</p>



<h2 class="wp-block-heading" id="h-rule-6-does-it-spark-joy">Rule 6: “Does it Spark Joy?”</h2>



<p>Ok, ok &#8211; I admit this iconic final question from Kondo is much less suited to data retention. However, there may still be an important (if less-than-perfectly-measurable) question that data teams should ask before coding a delete.</p>



<p><strong>Does a business leader strongly prefer to retain certain data despite the lack of any clear regulatory or business-driven reason for doing so?</strong></p>



<p>If so, keep it&#8230;unless doing so presents a serious risk. If you feel the risk is serious, continue to voice your concerns; otherwise, wait and keep seeking clarification.</p>



<h2 class="wp-block-heading" id="h-now-for-some-best-practices">Now for some best practices…</h2>



<p>Keep in mind that the practices listed here do not include the critical practice of backing up your production data, which has been discussed in previous posts (<a href="https://theserogroup.com/dba/whats-in-this-sql-server-backup-file/" target="_blank" rel="noreferrer noopener">here</a>, <a href="https://theserogroup.com/sql-server/using-vm-snapshots-to-backup-sql-server/" target="_blank" rel="noreferrer noopener">here</a>, and <a href="https://theserogroup.com/azure/how-to-test-sql-server-backups-using-dbatools/" target="_blank" rel="noreferrer noopener">here</a>). Always make sure that backups are in place before beginning to archive or delete data.</p>



<h3 class="wp-block-heading" id="h-1-nbsp-nbsp-nbsp-nbsp-replicate-and-archive-data-in-flight">1.&nbsp;&nbsp;&nbsp;&nbsp; Replicate and archive data “in-flight”.</h3>



<p>Archiving and/or replicating your data at various points in your pipeline is a best practice. Process failures in data pipelines are not uncommon, and you need to be able to recover data from earlier stages of the pipeline if you need to reprocess the data.</p>



<p>Three common examples of this practice, each of which should include purging data after an established retention period, are:</p>



<ul class="wp-block-list">
<li>Moving imported files to an archive folder</li>



<li>Replicating transactional databases to a staging database before further processing by downstream systems</li>



<li>Staging imported API data in its own table or database before integrating with internal systems.</li>
</ul>
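<p>The first example above can be sketched in a few lines. This is a minimal illustration, not a production process: the <code>incoming</code>/<code>archive</code> folder names and the 90-day retention period are assumptions you would replace with your own retention policy.</p>

```python
import shutil
import time
from pathlib import Path

RETENTION_DAYS = 90  # hypothetical retention period; set per your policy


def archive_imported_files(incoming: Path, archive: Path) -> list[str]:
    """Move successfully imported files into an archive folder."""
    archive.mkdir(parents=True, exist_ok=True)
    moved = []
    for f in incoming.iterdir():
        if f.is_file():
            shutil.move(str(f), str(archive / f.name))
            moved.append(f.name)
    return moved


def purge_expired(archive: Path, retention_days: int = RETENTION_DAYS) -> list[str]:
    """Delete archived files older than the retention period."""
    cutoff = time.time() - retention_days * 86400
    purged = []
    for f in archive.iterdir():
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            purged.append(f.name)
    return purged
```

<p>The same pattern applies whether the "archive" is a local folder, a network share, or a cold cloud storage container &#8211; archive immediately after successful import, then purge on a schedule.</p>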



<h3 class="wp-block-heading" id="h-2-nbsp-nbsp-nbsp-nbsp-archive-in-cold-storage-and-protect-the-archive">2.&nbsp;&nbsp;&nbsp;&nbsp; Archive in cold storage and protect the archive.</h3>



<p>Consider how you will store data that has served its immediate purpose and has been determined to be a candidate for long-term storage. There are pros and cons to each method, and a combination of archiving methods may be appropriate for your different data sets. Here are some options and considerations.</p>



<ul class="wp-block-list">
<li><strong>Onsite physical storage</strong><ul><li>Pros: ease of access in an emergency, familiar technology</li></ul>
<ul class="wp-block-list">
<li>Cons: vulnerable to tampering, theft, and physical damage/natural disasters</li>
</ul>
</li>



<li><strong>Offsite storage</strong> (tape, optical disk, magnetic hard drives)<ul><li>Pros: well-established, relatively inexpensive, usually more secure than onsite</li></ul>
<ul class="wp-block-list">
<li>Cons: slower recovery times</li>
</ul>
</li>



<li><strong>Cold cloud storage</strong><ul><li>Pros: speed and ease of recovery, alleviated burden of maintenance, built-in security, inexpensive</li></ul>
<ul class="wp-block-list">
<li>Cons: potentially less familiar to established IT departments than traditional methods</li>
</ul>
</li>



<li><strong>Data lake</strong><ul><li>Pros: accessibility, speed and ease of recovery, suitable for use with emerging technologies, built-in security</li></ul>
<ul class="wp-block-list">
<li>Cons: less-established, steeper learning curve, potentially expensive, requires careful governance of user access</li>
</ul>
</li>
</ul>



<h3 class="wp-block-heading" id="h-3-nbsp-nbsp-nbsp-nbsp-don-t-forget-critical-data-that-is-managed-by-third-parties">3.&nbsp;&nbsp;&nbsp;&nbsp; Don’t forget critical data that is managed by third parties.</h3>



<p>Over time, businesses may find that a significant body of company data resides in data stores managed by third parties. Since maintenance has often been delegated to vendors in these cases, archiving this data is sometimes overlooked.</p>



<p>While some of these systems allow you to set a retention policy, it is often well worth working with your vendors to set up an extraction process that archives a copy of your data from these systems into your own repositories as well.</p>



<h3 class="wp-block-heading" id="h-4-nbsp-nbsp-nbsp-nbsp-establish-retention-schedules-and-procedures-for-purging-data">4.&nbsp;&nbsp;&nbsp;&nbsp; Establish retention schedules and procedures for purging data.</h3>



<p>Business, regulatory, and legislative needs dictate what should be saved and for how long, and these may differ between data sets. Establishment of policies and procedures for deleting data will ultimately be the responsibility of data owners.</p>



<p>These policies should address the following areas:</p>



<ul class="wp-block-list">
<li>Who is authorized to purge data?</li>



<li>In what maintenance windows can purges occur, given that deletion can be slow and data processing jobs/replication may need to be disabled?</li>



<li>How will notification be given to the business?</li>



<li>What validation and integrity checks must be in place?</li>



<li>What rollback procedures will be used if necessary?</li>
</ul>
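<p>To make the maintenance-window and rollback questions concrete, here is a sketch of a batched purge, using Python's built-in <code>sqlite3</code> module purely for illustration; the <code>events</code> table, <code>event_date</code> column, and batch size are hypothetical. (On SQL Server the same pattern is typically a loop of small <code>DELETE TOP (N)</code> batches.)</p>

```python
import sqlite3

BATCH_SIZE = 1000  # small batches keep locks short inside a maintenance window


def purge_expired_rows(conn: sqlite3.Connection, cutoff_date: str) -> int:
    """Delete rows older than cutoff_date in small committed batches.

    Returns the total number of rows purged. Each batch is its own
    transaction, so a failure mid-purge loses at most one batch of work
    and leaves all earlier batches committed.
    """
    total = 0
    while True:
        with conn:  # one transaction per batch; rolls back on error
            cur = conn.execute(
                "DELETE FROM events WHERE rowid IN "
                "(SELECT rowid FROM events WHERE event_date < ? LIMIT ?)",
                (cutoff_date, BATCH_SIZE),
            )
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total
```

<p>Pairing the returned row count with a pre-purge count is a simple validation check, and the per-batch transactions answer the rollback question at a small granularity.</p>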



<h2 class="wp-block-heading" id="h-just-the-tip-of-the-iceberg">Just the tip of the iceberg…</h2>



<p>There is much, much more to say about all these topics.</p>



<p>If you have made it as far as committing to cleaning up your data but the rest seems overwhelming, never fear! Many vendors are happy to provide any level of assistance.</p>



<p>If you have a good handle on your archiving and deletion processes but would like assistance with a SQL Server implementation of them, <a href="https://theserogroup.com/#contact">reach out</a>! We are here to help.</p>
<p>The post <a href="https://theserogroup.com/data-strategy/archiving-and-deletion-strategy-konmari-for-data-management/">Archiving and Deletion Strategy&#8230;KonMari for Data Management?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5588</post-id>	</item>
		<item>
		<title>Data Management Strategy: A Kaizen Approach</title>
		<link>https://theserogroup.com/data-strategy/data-management-strategy-a-kaizen-approach/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Thu, 21 Sep 2023 21:17:28 +0000</pubDate>
				<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[Data Management]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[Database Development]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Audit]]></category>
		<category><![CDATA[SQL Events]]></category>
		<category><![CDATA[SQL Security]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5522</guid>

					<description><![CDATA[<p>“Take time to improve our data management processes? Sorry, we are just too busy”… fixing errors from broken data processes. This refrain is more common than you think in IT departments of all sizes. Or maybe you live that reality every day and are fully aware that clunky, error-laden processes eat away at your team’s&#8230; <br /> <a class="read-more" href="https://theserogroup.com/data-strategy/data-management-strategy-a-kaizen-approach/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/data-strategy/data-management-strategy-a-kaizen-approach/">Data Management Strategy: A Kaizen Approach</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>“Take time to improve our data management processes? Sorry, we are just too busy”… <em>fixing errors from broken data processes.</em></p>



<p>This refrain is more common than you think in IT departments of all sizes. Or maybe you live that reality every day and are fully aware that clunky, error-laden processes eat away at your team’s efficiency (and morale).</p>



<h2 class="wp-block-heading">Data management and continuous improvement may sound like they should always go together, but they often don’t.</h2>



<p>Many times our data management practices involve business-critical data processes that break regularly and need to be improved, but we have no time to make those improvements because there are so many data processes that need to be “managed” (a.k.a. remediated regularly).</p>



<h2 class="wp-block-heading">So, how can we break this cycle?</h2>



<figure class="wp-block-image size-large is-resized"><a href="https://theserogroup.com/wp-content/uploads/2023/09/stop-scaled.jpg"><img loading="lazy" decoding="async" src="https://theserogroup.com/wp-content/uploads/2023/09/stop-1024x683.jpg" alt="" class="wp-image-5528" style="width:449px;height:300px" width="449" height="300" srcset="https://theserogroup.com/wp-content/uploads/2023/09/stop-1024x683.jpg 1024w, https://theserogroup.com/wp-content/uploads/2023/09/stop-300x200.jpg 300w, https://theserogroup.com/wp-content/uploads/2023/09/stop-768x513.jpg 768w, https://theserogroup.com/wp-content/uploads/2023/09/stop-1536x1025.jpg 1536w, https://theserogroup.com/wp-content/uploads/2023/09/stop-2048x1367.jpg 2048w, https://theserogroup.com/wp-content/uploads/2023/09/stop-1618x1080.jpg 1618w" sizes="auto, (max-width: 449px) 100vw, 449px" /></a></figure>



<p><strong>Welcome back</strong> to the <a href="https://theserogroup.com/data-strategy/data-collection-two-key-tools-to-improve-your-data-strategy/">second</a> of three posts on how to refine your strategy for Data Lifecycle Management (DLM)!</p>



<p>In this post, we will focus on Data Management as the second of the three DLM stages: Data Collection, Data Management, and Data Deletion.</p>



<h2 class="wp-block-heading">Kaizen for Data Management</h2>



<p>The Kaizen approach, famously championed by the Toyota corporation, suggests that small organizational changes can lead to a culture of continuous improvement. This culture will ultimately lead to better processes, greater efficiency, improved outcomes, and increased morale.</p>



<p><a href="https://kaizen.com/insights/continuous-improvement-culture/">The Kaizen Institute</a> states,</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-center"><em>“As part of the corporate culture, continuous improvement becomes an ongoing process integrated into the organization’s daily activities. Employees are encouraged to challenge the status quo, suggest ideas, and implement improvements. Continuous learning and development are valued, and mistakes are seen as growth opportunities.”</em></p>
</blockquote>



<p>This means that adopting a Kaizen approach to your data management strategy can be a lever for driving a continuous improvement culture on your data team without sweeping, drastic changes.</p>



<p>Small improvements to existing processes can slowly bring a significant reduction in process failures and improvements in efficiency, accuracy, and team morale. (<a href="https://www.cio.com/article/220369/what-is-kaizen-a-business-strategy-focused-on-improvement.html">Here</a> is a short article about applying a Kaizen approach in an IT context.)</p>



<h2 class="wp-block-heading">So, how and where can we improve our data management?</h2>



<p>Where should you look to start identifying small improvements that might be implemented?</p>



<p>Consider the areas below with your team. Most likely you will find that you are very strong in some areas, but perhaps there are areas that have not been addressed at all. Start with the lowest hanging fruit, and bit by bit you will find that you are slowly filling the gaps and addressing the technical debt that every established data team faces. &nbsp;</p>



<h2 class="wp-block-heading">First, analyze your data structures.</h2>



<p>The applications, tools, and data processes in place at your company each impose requirements on how your data must be structured to be usable. Unfortunately, these requirements rarely align.</p>



<p>When you think about the flow of your data, think about consistency of format and type. As data flows into your system, it is often riddled with discrepancies in format, data type, and even the information it contains (but we will save that for another post).</p>



<p>As your data flows downstream toward the consumer, it should become more and more aligned in these areas. Why? Because the more points of contact that your technical teams must have with it (to transform it for particular use cases, etc.), the more points of failure you can have.</p>



<p>Strategic policies and governance and centralized data management can really help, but you don’t need an operational overhaul to improve!</p>



<h2 class="wp-block-heading">In line with a Kaizen approach, try encouraging small changes in these areas:</h2>



<h3 class="wp-block-heading" id="h-establish-data-standards">Establish data standards</h3>



<p>This will be an ongoing process. You will want to give thought to what your core standards should be, especially for mission-critical data elements like identifiers, account numbers, etc., since these are more difficult to change once processes are mature. However, your standards will expand and be refined as your business matures its data processes.</p>



<h3 class="wp-block-heading" id="h-adopt-an-enterprise-modeling-tool">Adopt an enterprise modeling tool</h3>



<p>Document and catalog your data standards using a modeling tool. Include all the metadata associated with your data objects and their relationships. The business will use the resulting documentation at every level (system administration, development, business analysis, and report consumption) for understanding and interpreting the data.</p>



<h3 class="wp-block-heading" id="h-transform-your-data-with-consistency">Transform your data with consistency</h3>



<p>Wherever your transformation layer lives in your processes (and hopefully there are as few of these as possible), always architect toward your established data standards.</p>



<p>Establishing governance and centralized management can really help here, but feel free to start small! Apply these principles to new processes and only to established processes as they require other changes. Encourage a culture that celebrates these improvements and looks for opportunities to make things better.</p>
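<p>As a small illustration of architecting toward your standards, suppose they call for ISO 8601 date strings and 10-digit zero-padded customer identifiers (both hypothetical standards). A transformation step might normalize incoming values like this:</p>

```python
from datetime import datetime

# Hypothetical standards: ISO 8601 dates, 10-digit zero-padded customer IDs.
DATE_FORMATS = ["%m/%d/%Y", "%Y-%m-%d", "%d-%b-%Y"]  # formats seen upstream


def standardize_date(raw: str) -> str:
    """Normalize any recognized upstream date format to ISO 8601."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")


def standardize_customer_id(raw: str) -> str:
    """Strip formatting noise and zero-pad customer IDs to 10 digits."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if not digits or len(digits) > 10:
        raise ValueError(f"Invalid customer id: {raw!r}")
    return digits.zfill(10)
```

<p>Raising on unrecognized values, rather than passing them through, is a deliberate choice: it surfaces new upstream formats as explicit failures instead of silent inconsistencies downstream.</p>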



<h3 class="wp-block-heading" id="h-implement-database-source-control">Implement database source control</h3>



<p>That’s right – employ a source control process for your database objects. Many companies do not take this step. However, having source control in place not only protects your team from losing important data objects. It can also help ensure that new structures follow established standards when code reviews, pull request approvals, and other best practices are in place.</p>



<h3 class="wp-block-heading" id="h-structure-your-deployment-process">Structure your deployment process</h3>



<p>Lastly, establish protocols around deployment. Some options include:</p>



<ul class="wp-block-list">
<li>Creating a deployment cadence that uses established deployment windows</li>



<li>Setting up a change advisory board for reviewing changes before approving them to be deployed to production</li>



<li>Designating deployment managers that are responsible for deploying code</li>



<li>And, of course, you can always automate your deployments! Just be careful to include the appropriate guardrails.</li>
</ul>



<p><em>Remember – slow and steady wins the race with continuous improvement.</em></p>



<h2 class="wp-block-heading">Second, evaluate your data pipelines.</h2>



<p>Outside of data structure, there are other data process considerations that need to be evaluated as well.</p>



<h3 class="wp-block-heading">Accuracy &amp; Reliability</h3>



<ul class="wp-block-list">
<li>Are your data ingestion and replication processes accurate and reliable?</li>
</ul>



<p>Sometimes when evaluating our pipelines, we find that issues with error handling, purge processes, SFTP, APIs, replication, logging or any number of other processes are causing duplicative, inaccurate, incomplete, or undelivered data transfers. Look out for these and correct them as you find them.</p>



<h3 class="wp-block-heading">Maintenance &amp; Scalability</h3>



<p>Also, ask yourself these questions:</p>



<ul class="wp-block-list">
<li>Are you frequently stretching the limits of any of your allocated hardware, VMs, databases, or network resources?</li>



<li>Are any of your system resources in need of upgrades or patching? Are you missing protocols to ensure that these are completed?</li>



<li>Are there other systems, applications, technologies, or vendors that might suit your current or projected needs better?</li>



<li>Are your data processes too slow? Do they struggle with the volume of data they must process?</li>
</ul>



<p>If your answer to any of these questions is “yes”, then you have opportunities for improvement (<em>slow and steady…</em>).</p>



<h2 class="wp-block-heading">Third, never forget about security with data management.</h2>



<p><a href="https://theserogroup.com/sql-server-resources/sql-server-security-best-practices/">Security</a> should always be top of mind when considering your company data. Here are some areas to evaluate.</p>



<h3 class="wp-block-heading">Security – External</h3>



<p>Review the security around the infrastructure supporting your company’s data processes for points of external connection. Pay particular attention to any processes that utilize third party tools or that export or extract data to/from external sources.</p>



<h3 class="wp-block-heading">Security – Internal &nbsp;&nbsp;</h3>



<p>For internal sharing and usage, security measures should be concerned with careful provisioning of access to data and systems. For lower-level systems, be sure to mask or de-identify any sensitive data.</p>
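<p>As one simple sketch of de-identifying data for a lower-level environment (the masking rule and salt value below are illustrative, not a compliance standard):</p>

```python
import hashlib


def mask_email(email: str, salt: str = "dev-refresh-2024") -> str:
    """Replace a real email with a stable, de-identified placeholder.

    Hashing with a salt keeps the masked value consistent everywhere the
    same address appears (so joins still work in the lower environment)
    without exposing the original address. The salt here is illustrative
    and should be kept out of source control in practice.
    """
    digest = hashlib.sha256((salt + email.lower()).encode()).hexdigest()[:12]
    return f"user_{digest}@example.invalid"
```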



<p>Further, for sensitive or confidential data, give careful consideration to protecting against any intentional or unintentional data leaks. Areas to consider creating policies around include:</p>



<ul class="wp-block-list">
<li>Unsecured physical devices or paperwork</li>



<li>Keeping only what data is necessary</li>



<li>Emailing sensitive data</li>



<li>Downloading data to personal devices</li>



<li>What to do if a suspected data breach has occurred</li>
</ul>



<h2 class="wp-block-heading">Is your head spinning? Don’t worry!</h2>



<p>Remember that data management is an ongoing process of continuous improvement, and we will delve into many of these topics more deeply in upcoming posts.</p>



<p>In the meantime, if you have a pressing need and could use some help detailing a roadmap, <a href="https://theserogroup.com/#contact">let us know</a>! We love to help empower continuous improvement with our clients.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-center"><em>&#8220;We cannot become what we want to be by remaining what we are.&#8221;</em></p>
</blockquote>



<p class="has-text-align-center"><em>&#8211; Max DePree</em></p>
<p>The post <a href="https://theserogroup.com/data-strategy/data-management-strategy-a-kaizen-approach/">Data Management Strategy: A Kaizen Approach</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5522</post-id>	</item>
		<item>
		<title>Data Collection: Two Key Tools to Improve Your Data Strategy</title>
		<link>https://theserogroup.com/data-strategy/data-collection-two-key-tools-to-improve-your-data-strategy/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Tue, 29 Aug 2023 13:01:31 +0000</pubDate>
				<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[Data Collection]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Consultant]]></category>
		<category><![CDATA[SQL Events]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<category><![CDATA[The Sero Group]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5460</guid>

					<description><![CDATA[<p>Are your company&#8217;s data collection processes sound? Do they align with best practices? Welcome to the first of three posts on how to refine your strategy for data lifecycle management. In this post, we will look at how to evaluate your data collection processes for improvements. Data Collection in Data LifeCycle Management (DLM) As has&#8230; <br /> <a class="read-more" href="https://theserogroup.com/data-strategy/data-collection-two-key-tools-to-improve-your-data-strategy/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/data-strategy/data-collection-two-key-tools-to-improve-your-data-strategy/">Data Collection: Two Key Tools to Improve Your Data Strategy</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Are your company&#8217;s data collection processes sound? Do they align with best practices?</p>



<p>Welcome to the first of three posts on how to refine your strategy for data lifecycle management. In this post, we will look at how to evaluate your <strong>data collection processes</strong> for improvements.</p>



<h2 class="wp-block-heading">Data Collection in Data Lifecycle Management (DLM)</h2>



<p>As has been noted in a previous post on the <a href="https://theserogroup.com/data-strategy/are-information-and-data-lifecycle-management-processes-different/" target="_blank" rel="noreferrer noopener">difference between Data Lifecycle Management (DLM) and Information Lifecycle Management (ILM)</a>, there are fundamentally three phases of DLM into which all physical data-related tasks fall: Data Collection &amp; Creation, Data Management, and Data Deletion.</p>



<p>As an IT leader, there are <strong>two important exercises</strong> you should perform to evaluate your data collection strategy. By performing these, you will produce living documents that should guide how your company creates, ingests, and consumes data now and in the future.</p>



<h2 class="wp-block-heading">First, perform a <em>data collection audit.</em></h2>



<p>The first step in evaluating your DLM processes is to gain a complete understanding of the data collection processes your company is currently using. The best way to do this is through an internal audit.</p>



<p>Your data collection audit should include answers to the following questions:</p>



<ul class="wp-block-list">
<li>Where is data coming into your systems (websites, transactional systems, vendors)?</li>



<li>What systems or processes are used to create or collect data (software, web forms, APIs, FTP)?</li>



<li>What data formats are being leveraged by these processes (SQL, JSON, CSV, XML)?</li>



<li>What security, threat mitigation, backups, and archiving processes are in place for these processes and data stores? &nbsp;(<a href="https://devops.com/optimizing-security-in-data-collection-processes/" target="_blank" rel="noreferrer noopener nofollow">Here</a> is a good summary of what to look for.) </li>
</ul>
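<p>One lightweight way to capture the audit answers is a simple inventory structure, one record per data source. Every field name and sample value below is illustrative:</p>

```python
from dataclasses import dataclass, field


@dataclass
class DataSourceRecord:
    """One row in a data collection audit inventory (fields illustrative)."""
    name: str                # e.g. "Orders API feed"
    entry_point: str         # website, transactional system, vendor, ...
    mechanism: str           # software, web form, API, FTP, ...
    formats: list = field(default_factory=list)  # SQL, JSON, CSV, XML
    backups_in_place: bool = False
    archive_process: str = "none documented"


def gaps(records: list) -> list[str]:
    """Flag sources with no backups or no documented archive process."""
    return [
        r.name
        for r in records
        if not r.backups_in_place or r.archive_process == "none documented"
    ]


inventory = [
    DataSourceRecord(
        name="Orders API feed",
        entry_point="vendor",
        mechanism="REST API",
        formats=["JSON"],
        backups_in_place=True,
        archive_process="staged to archive table, 13-month retention",
    ),
]
```

<p>Even a small helper like <code>gaps</code> turns the audit from a one-time document into a checklist you can re-run as sources are added.</p>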



<p>Finally, examine the information you gathered through a strategic lens. Look for vulnerabilities, inefficiencies, and pain points in your processes. Then work with your team to devise a strategic plan and implementation timeline for achieving improvements in these areas.</p>



<h2 class="wp-block-heading">Second, create a<em> data tracking plan</em>.</h2>



<p>Now that you have audited your data collection processes, give thought to why you are collecting the data you do and what data <em>needs</em> to be collected. Consult business analysts in your company about which metrics they would like to track. This will help you understand what data points need to be collected. Likewise, find out what government regulations dictate about what data should be collected and retained.</p>



<p>Ask questions like:</p>



<ul class="wp-block-list">
<li>Is the <em>right </em>data being collected?</li>



<li>Are there any missing data points?</li>



<li>Are you collecting irrelevant or duplicated data?</li>
</ul>



<h3 class="wp-block-heading">Bridge the Gap</h3>



<p>Undoubtedly, it is tricky to bridge the gap between the business, which has ideas about what data they would <em>like</em> to track, and the technical team, who knows <em>how</em> to track it. A data tracking plan is a tool that can help with this.</p>



<p>While some people strictly define what a data tracking plan must consist of, a simple plan is often sufficient. Your data tracking plan defines your primary business objects (customers, products, stores, etc.) and the metrics or events surrounding them that your business would like to have more information about.</p>
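<p>A simple plan can even start as a plain structure like the one below; the business object, metric, and owner values are purely illustrative:</p>

```python
# One entry in a minimal data tracking plan; every field value here is
# illustrative, not a formal standard.
tracking_plan = [
    {
        "business_object": "customer",
        "metric": "signup_completed",
        "why": "Measure onboarding funnel conversion",
        "source": "web signup form",
        "owner": "Marketing analytics",
        "format": "JSON event with customer_id and UTC timestamp",
    },
]

REQUIRED_FIELDS = {"business_object", "metric", "why", "source", "owner", "format"}


def validate_plan(plan: list) -> list[str]:
    """Return a list of problems; empty if every entry is complete."""
    problems = []
    for i, entry in enumerate(plan):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            problems.append(f"entry {i} missing: {sorted(missing)}")
    return problems
```

<p>The required-fields check is the useful part: it forces the business ("why") and the technical team ("source", "format") to each fill in their half of every entry.</p>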



<p>Before spending time creating your own, take a look at the many templates available to get you started. Here is a <a href="https://www.avo.app/blog/9-free-tracking-plan-templates-from-mixpanel-amplitude-segment-and-more#lle4n0ay38-" target="_blank" rel="noreferrer noopener">link</a> to an evaluation of a few free templates to start your research.</p>



<h3 class="wp-block-heading">Create the plan</h3>



<p>Once you have your template, start your internal planning discussions with questions like:</p>



<ul class="wp-block-list">
<li>What core business objects are we concerned with?</li>



<li>What metrics do we care about for those objects (that is, what do we want to track about them)?</li>



<li>Why do we want to track these metrics?</li>



<li>What data needs to be collected to obtain these metrics and how will it be defined?</li>



<li>Where can the data be obtained? Do we already collect it?</li>



<li>Who will govern the information once we have it?</li>



<li>Who will manage the data collection?</li>



<li>What format does the data need to be in to be useful?</li>
</ul>
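<p>To make the shape of a plan concrete, here is a minimal sketch of what answers to those questions might look like once captured. This is purely illustrative: the object names, metrics, owners, and field names are hypothetical, and a real plan would typically live in a shared template or spreadsheet rather than code.</p>

```python
# Hypothetical example of a minimal data tracking plan, expressed as plain
# Python dictionaries. All names and values here are illustrative only.
tracking_plan = {
    "customer": {  # core business object
        "metrics": [
            {
                "name": "signup_completed",
                "why": "Measure acquisition funnel conversion",
                "source": "web app events",   # where the data is obtained
                "already_collected": True,
                "owner": "Marketing",         # who governs the information
                "format": "event with timestamp and customer_id",
            },
        ],
    },
    "product": {  # another core business object
        "metrics": [
            {
                "name": "units_sold_daily",
                "why": "Track demand per product",
                "source": "point-of-sale system",
                "already_collected": False,
                "owner": "Sales Ops",
                "format": "daily aggregate per product_id",
            },
        ],
    },
}

# A quick gap check: which desired metrics are not yet being collected?
missing = [
    metric["name"]
    for obj in tracking_plan.values()
    for metric in obj["metrics"]
    if not metric["already_collected"]
]
print(missing)  # -> ['units_sold_daily']
```

<p>Even a lightweight structure like this gives the business and the technical team a shared artifact to review: each entry records what is tracked, why, where it comes from, and who owns it.</p>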



<p>It can certainly be challenging on many fronts for the business and IT to come together to create a data tracking plan. However, facilitating this collaboration will be well worth the effort in terms of the clear data strategy objectives it produces. The costs avoided by steering clear of misguided data projects will more than repay the time and energy spent in coordinated planning.</p>



<h2 class="wp-block-heading">Finally, update your data strategy and implement changes.</h2>



<p>Once you have assessed your data collection processes and have identified improvements, you&#8217;ll need to assign priorities to your findings. Work with both the business and your technical team to set these priorities, as well as to build a roadmap for implementation. </p>



<p>The information you have gathered through cross-functional cooperation and through using these tools will help you make a strong case to business leaders for the importance of these strategic improvements. </p>



<h2 class="wp-block-heading" id="h-want-to-learn-more">Want to learn more?</h2>



<p>Looking for more information about Data Strategy and how it can help align IT and business goals? Check out <a href="https://theserogroup.com/tag/data-strategy/">these posts</a>. </p>



<p>If you&#8217;d like to learn more about how we approach Data Strategy, or if you have some concerns about your SQL estate, give us a <a href="https://theserogroup.com/#contact" target="_blank" rel="noreferrer noopener">call</a>. We can help.</p>
<p>The post <a href="https://theserogroup.com/data-strategy/data-collection-two-key-tools-to-improve-your-data-strategy/">Data Collection: Two Key Tools to Improve Your Data Strategy</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5460</post-id>	</item>
		<item>
		<title>Are Information and Data Lifecycle Management Processes Different…and Who Cares?</title>
		<link>https://theserogroup.com/data-strategy/are-information-and-data-lifecycle-management-processes-different/</link>
		
		<dc:creator><![CDATA[Natasha Collins]]></dc:creator>
		<pubDate>Thu, 10 Aug 2023 12:50:53 +0000</pubDate>
				<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Database Administration]]></category>
		<category><![CDATA[IT Manager]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[Sero]]></category>
		<category><![CDATA[Sero Group]]></category>
		<category><![CDATA[Serogroup]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Consultant]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQL Server Consultant]]></category>
		<category><![CDATA[SQL Server Management]]></category>
		<category><![CDATA[The Sero Group]]></category>
		<guid isPermaLink="false">https://theserogroup.com/?p=5437</guid>

					<description><![CDATA[<p>Are Data Lifecycle Management (DLM) and Information Lifecycle Management (ILM) just fancy terms for the exact same thing &#8211; namely, data management? Will understanding these terms actually impact your data strategy in any meaningful way? Will knowing this distinction affect your business at all? Recently, we began a series of posts on data strategy by&#8230; <br /> <a class="read-more" href="https://theserogroup.com/data-strategy/are-information-and-data-lifecycle-management-processes-different/">Read more</a></p>
<p>The post <a href="https://theserogroup.com/data-strategy/are-information-and-data-lifecycle-management-processes-different/">Are Information and Data Lifecycle Management Processes Different…and Who Cares?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Are Data Lifecycle Management (DLM) and Information Lifecycle Management (ILM) just fancy terms for the exact same thing &#8211; namely, data management? Will understanding these terms actually impact your data strategy in any meaningful way? Will knowing this distinction affect your business <em>at all</em>?</p>



<p>Recently, we began a series of posts on data strategy by asking whether your <a href="https://theserogroup.com/data-strategy/how-to-justify-it-spend-is-company-data-an-asset-or-a-utility/" target="_blank" rel="noreferrer noopener">company data is an asset or a utility</a>. We then uncovered <a href="https://theserogroup.com/data-strategy/10-data-storage-considerations-for-growing-companies/" target="_blank" rel="noreferrer noopener">10 Data Storage Considerations to Improve Your Company Data Strategy</a>. In this post, I will argue that recognizing the distinction between DLM and ILM is important for IT leaders to help their companies get the most out of their data.</p>



<p>Now, the difference between Data Lifecycle Management (DLM) and Information Lifecycle Management (ILM) might initially seem like hair-splitting. However, while DLM incorporates management policies for the <em>physical </em>aspects of data <em>as data </em>(type, size, location, age, etc.), ILM addresses the <em>content</em> of the data <em>as information </em>(its accuracy, reliability, sensitivity/confidentiality, etc.).</p>



<p>Here are some of the differences and why <em>both</em> are important to consider.</p>



<h2 class="wp-block-heading">Data Lifecycle Management</h2>



<p>At its highest level, DLM addresses data creation, data management, and data deletion.</p>



<p>Within these categories, there are many other elements of data management that are included as well:</p>



<ul class="wp-block-list">
<li>Creation / Collection</li>



<li>Classification</li>



<li>Redundancy</li>



<li>Duplication</li>



<li>Integrity</li>



<li>Usage and Availability</li>



<li>Sharing</li>



<li>Storage</li>



<li>Security</li>



<li>Archiving</li>
</ul>



<p>All these data process elements, when viewed through DLM, are fundamentally concerned with the physical aspects of managing data.<strong> Failing to address these areas in your data strategy is to gamble with the health of your data structures and the value of your company data.</strong></p>



<h2 class="wp-block-heading">Information Lifecycle Management</h2>



<p>On the other hand, ILM addresses issues associated with the <em>information </em>the data contains. ILM establishes policies that manage the data quality, business relevance, regulatory compliance, and legal liability of the data.</p>



<p>Many of the specific areas addressed by ILM establish protocols for processing data in a way that ensures data accuracy, reliable delivery, protection of sensitive information, and compliance with data privacy laws. Some elements of ILM can include:</p>



<ul class="wp-block-list">
<li>Data De-Identification / Masking</li>



<li>Data Quality Frameworks &amp; Audits</li>



<li>Development and QA Environment Refreshes</li>



<li>Source Control</li>



<li>Master Data Management</li>



<li>Classification and Governance</li>



<li>Sharing &amp; Usage</li>



<li>Security</li>



<li>Regulatory Compliance Audits</li>
</ul>



<p>While many elements of the data lifecycle have relevance in both management models, they are viewed through a different lens in each model.</p>



<h2 class="wp-block-heading">So&#8230;Who Cares?</h2>



<p>So, why does recognizing this distinction matter? Will your business really suffer without it?</p>



<p>The answer is that it matters because if this distinction is not recognized, the two sets of management policies can get conflated, with one getting largely ignored. <strong>When either DLM or ILM is neglected, important elements of the data lifecycle can appear to be fully managed when in fact they are only partially addressed.</strong></p>



<p>Perhaps even more importantly, when IT leaders do not see these management policies as distinct, they risk missing out on the benefits that come with using both together, like:</p>



<ul class="wp-block-list">
<li>Improved system performance</li>



<li>Increased availability and accessibility of data</li>



<li>Improved data quality</li>



<li>Increased data consistency across the organization</li>



<li>Improved recoverability</li>



<li>Increased security</li>



<li>Increased user satisfaction</li>



<li>Controlled costs</li>



<li>Improved regulatory compliance</li>
</ul>



<p>In upcoming posts, we will unpack these models further while exploring more ways to improve your data strategy.</p>



<h2 class="wp-block-heading" id="h-want-to-work-with-the-sero-group">Want to work with The SERO Group?</h2>



<p>Would you like some outside input in any of these areas? We love to work alongside IT leaders and their teams to help them establish the use of best practices in their data environments.</p>



<p>If that’s something you’d like to learn more about,&nbsp;<a href="https://theserogroup.com/#contact" target="_blank" rel="noreferrer noopener">let’s have a conversation</a>.</p>
<p>The post <a href="https://theserogroup.com/data-strategy/are-information-and-data-lifecycle-management-processes-different/">Are Information and Data Lifecycle Management Processes Different…and Who Cares?</a> appeared first on <a href="https://theserogroup.com">The SERO Group</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5437</post-id>	</item>
	</channel>
</rss>
