{"id":13682,"date":"2022-11-18T07:49:39","date_gmt":"2022-11-18T07:49:39","guid":{"rendered":"https:\/\/viewmyprojects.com\/winwirewp\/?p=13682"},"modified":"2023-11-28T10:13:41","modified_gmt":"2023-11-28T10:13:41","slug":"azure-data-factory-pipeline","status":"publish","type":"post","link":"https:\/\/viewmyprojects.com\/winwirewp\/blog\/azure-data-factory-pipeline\/","title":{"rendered":"How to create a metadata-driven Azure Data Factory Pipeline"},"content":{"rendered":"\n<p>When we want to copy a huge number of objects (for example, thousands of tables) or load data from a variety of sources, the appropriate approach is to list the object names, along with the required copy behaviors, in a control table, and then use parameterized pipelines that read this metadata from the control table and apply it to the copy jobs accordingly. By doing so, we can easily maintain (for example, add or remove) the list of objects to be copied by simply updating the object names in the control table instead of redeploying the pipelines. What\u2019s more, we will have a single place to check which objects are copied by which pipelines\/triggers, and with which copy behaviors.<\/p>\n\n\n\n<p>The Copy Data tool in <a href=\"https:\/\/viewmyprojects.com\/winwirewp\/blog\/load-data-from-sql-server-on-prem-to-azure-sql-database-using-azure-data-factory\/\">Azure Data Factory<\/a> (ADF) eases the journey of building such metadata-driven data copy pipelines. After we go through an intuitive, wizard-based flow, the tool generates parameterized pipelines and SQL scripts for us to create the external control tables accordingly. 
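<\/p>\n\n\n\n<p>As an illustration of what such a generated script contains, the control table might be created along the following lines. This is only a sketch \u2013 the table name here is made up, the exact column types may differ, and the script downloaded in the last step of the wizard is the authoritative version; the columns mirror the control table structure described later in this post.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>-- Illustrative sketch only; the wizard-generated script is authoritative.\n-- dbo.MetadataControlTable is a hypothetical name.\nCREATE TABLE dbo.MetadataControlTable (\n    Id INT IDENTITY(1, 1) PRIMARY KEY,\n    SourceObjectSettings NVARCHAR(MAX),        -- source schema\/object name (JSON)\n    SourceConnectionSettingsName NVARCHAR(200),\n    CopySourceSettings NVARCHAR(MAX),          -- source query\/partition options (JSON)\n    SinkObjectSettings NVARCHAR(MAX),          -- destination schema\/object name (JSON)\n    SinkConnectionSettingsName NVARCHAR(200),\n    CopySinkSettings NVARCHAR(MAX),            -- e.g. pre-copy script (JSON)\n    CopyActivitySettings NVARCHAR(MAX),        -- source-to-destination mapping (JSON)\n    TopLevelPipelineName NVARCHAR(200),\n    TriggerName NVARCHAR(200),\n    DataLoadingBehaviorSettings NVARCHAR(MAX), -- watermark\/loading behavior (JSON)\n    TaskId BIGINT,\n    CopyEnabled BIT                            -- 1 = copy, 0 = skip\n);<\/code><\/pre>\n\n\n\n<p>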
After we run the generated scripts to create the control table in our SQL database, our Azure Data Factory pipeline will read the metadata from the control table and apply it to the copy jobs automatically.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Prerequisites to Create an Azure Data Factory Pipeline<\/strong><\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li class=\"blog-detail-list\">An ADF account.<\/li>\n\n\n\n<li class=\"blog-detail-list\">Azure SQL databases (ideally three); however, for this demo we are going to use two databases \u2013 one for the source and one for the target.<\/li>\n\n\n\n<li class=\"blog-detail-list\">Managed Identity access from ADF to the SQL database\/s should be set up. <\/li>\n<\/ol>\n\n\n\n<p class=\"blog-detail-list\">               a. Create an Azure Active Directory admin on the database server\/s.<br>               b. Log in using the Active Directory account in SSMS and connect to the respective databases.<br>               c. Run the following commands:<\/p>\n\n\n\n<p><strong>CREATE USER<\/strong> [adf account name] <strong>FROM EXTERNAL PROVIDER;<\/strong><\/p>\n\n\n\n<p><strong>ALTER ROLE<\/strong> [db_owner] <strong>ADD<\/strong> MEMBER [adf account name];<\/p>\n\n\n\n<ul class=\"blog-detail-list wp-block-list\">\n<li>A sample table and data in the source database. We used the sample data feature while provisioning the source <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/azure-sql\/database\/sql-database-paas-overview?view=azuresql\" target=\"_blank\" rel=\"noopener\">Azure SQL database<\/a>.<\/li>\n\n\n\n<li>Enable the firewall rules on the source\/target SQL servers to allow ADF activities to read data from the tables.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Demo (Copy data from Azure SQL database to Azure SQL database)<\/strong><\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li class=\"blog-detail-list\">Log in to the ADF account and navigate to the home page. 
Click the Ingest icon.<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-21-1.webp\" alt=\"Azure Data Factory Pipeline\" class=\"wp-image-13683\" style=\"width:432px;height:332px\"\/><\/figure>\n\n\n\n<p class=\"blog-detail-list\">2. Select Metadata-driven copy task (Preview).<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-22.webp\" alt=\"Azure Data Factory Pipeline\" class=\"wp-image-13684\" style=\"width:465px;height:267px\"\/><\/figure>\n\n\n\n<p class=\"blog-detail-list\">3. Select the control table data store. We might have to create a linked service if one does not already exist.<\/p>\n\n\n\n<p>Specify the table name in the boxes below. <strong>This will be the control table where all the metadata information will be stored.<\/strong><\/p>\n\n\n\n<p>It could be an existing table or a new table.<\/p>\n\n\n\n<p>NOTE \u2013 If it\u2019s a new table, it must be created inside the database where the control table will be stored. The script will be available for download in the last step of this wizard (Step #10).<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-23.webp\" alt=\"Azure Data Factory Pipeline\" class=\"wp-image-13685\" style=\"width:498px;height:249px\"\/><\/figure>\n\n\n\n<p class=\"blog-detail-list\">4. Select the source. We might have to create a linked service if one does not already exist.<\/p>\n\n\n\n<p>a. 1 \u2013 Select the source type<br>b. 2 \u2013 Select the linked service<br>c. 
3 \u2013 Select the source tables<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-24.webp\" alt=\"Azure Data Factory Pipeline\" class=\"wp-image-13686\" style=\"width:363px;height:409px\"\/><\/figure>\n\n\n\n<p>After the source tables have been selected, click Next.<\/p>\n\n\n\n<p class=\"blog-detail-list\">5. We will be taken to the window shown below, where we can select either a full load or a delta load for each table.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-25.webp\" alt=\"ADF\" class=\"wp-image-13687\" style=\"width:531px;height:209px\"\/><\/figure>\n\n\n\n<p class=\"blog-detail-list\">6. Here we have opted for a full load for one of the source tables and a delta load for the second table.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-26.webp\" alt=\"ADF pipeline\" class=\"wp-image-13688\" style=\"width:656px;height:205px\"\/><\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-27.webp\" alt=\"ADF pipeline\" class=\"wp-image-13689\" style=\"width:669px;height:249px\"\/><\/figure>\n\n\n\n<p class=\"blog-detail-list\">7. 
If the delta load option is selected, the wizard gives us the option of selecting the watermark column and the start value of the watermark column, as shown below.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"850\" height=\"351\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/image-28.png\" alt=\"azure data factory pipeline monitoring\" class=\"wp-image-13690\" style=\"width:750px;height:310px\" srcset=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/image-28.png 850w, https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/image-28-300x124.png 300w, https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/image-28-768x317.png 768w\" sizes=\"auto, (max-width: 850px) 100vw, 850px\" \/><\/figure>\n\n\n\n<p class=\"blog-detail-list\">8. Select the destination.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-29.webp\" alt=\"azure data factory pipeline monitoring\" class=\"wp-image-13691\" style=\"width:616px;height:386px\"\/><\/figure>\n\n\n\n<p>a. 1 \u2013 Select the destination type<br>b. 2 \u2013 Select the destination linked service<br>c. 3 \u2013 There is an option to create the table if it does not exist<br>d. 4 \u2013 We can skip column mapping if the table names and schema are the same in the source and target databases.<\/p>\n\n\n\n<p class=\"blog-detail-list\">9. Click Next and we will be taken to the window below.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-30.webp\" alt=\"how to automate azure data factory pipeline\" class=\"wp-image-13692\" style=\"width:372px;height:349px\"\/><\/figure>\n\n\n\n<p>Here:<\/p>\n\n\n\n<p>a. 
1 \u2013 We can change the name of the copy data activity based on project requirements.<br>b. 2 \u2013 We can give the task a description.<br>c. 3 \u2013 Since more than one table is involved in the copy data activity, the tool loads the data in batches, and we can specify the batch size here.<\/p>\n\n\n\n<p class=\"blog-detail-list\">10. Click Next, review and finish, and we will be taken to the final deployment window.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-31.webp\" alt=\"how to automate azure data factory pipeline\" class=\"wp-image-13693\" style=\"width:699px;height:348px\"\/><\/figure>\n\n\n\n<p>The wizard will give us two scripts:<\/p>\n\n\n\n<ul class=\"blog-detail-list wp-block-list\">\n<li>One for creating the control table and inserting data into it.<\/li>\n\n\n\n<li>One for creating a stored procedure that updates the watermark values.<\/li>\n<\/ul>\n\n\n\n<p>Download these scripts and run them on the database where the control table is supposed to exist.<\/p>\n\n\n\n<p class=\"blog-detail-list\">11. After the wizard has finished successfully, it will have created the pipelines and datasets shown below.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-32.webp\" alt=\"azure data factory ci cd pipeline\" class=\"wp-image-13694\"\/><\/figure><\/div>\n\n\n<p>12. 
Trigger the pipeline.<\/p>\n\n\n\n<ul class=\"blog-detail-list wp-block-list\">\n<li>A new table with data will be created in the target database.<\/li>\n\n\n\n<li>If it\u2019s a delta load, the latest records from the source table will be moved into the target table.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Demo (Copy data from a flat file to Azure SQL database)<\/strong><\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Log in to the ADF account and navigate to the home page. Click the Ingest icon.<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-33.webp\" alt=\"azure data factory ci cd pipeline\" class=\"wp-image-13695\" style=\"width:383px;height:294px\"\/><\/figure>\n\n\n\n<p>2. Select Metadata-driven copy task (Preview).<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-34.webp\" alt=\"azure data factory ci cd pipeline\" class=\"wp-image-13697\" style=\"width:441px;height:253px\"\/><\/figure>\n\n\n\n<p>3. Select the control table data store.<\/p>\n\n\n\n<p>We can use the existing control table.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-35.webp\" alt=\"how to run pipeline in ADF\" class=\"wp-image-13698\" style=\"width:600px;height:288px\"\/><\/figure>\n\n\n\n<p>4. Select the source. We might have to create a linked service if one does not already exist.<\/p>\n\n\n\n<p>a. 1 \u2013 Select the source type<br>b. 2 \u2013 Select the linked service<br>c. 
3 \u2013 Select the source file\/folder<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-36.webp\" alt=\"how to run pipeline in ADF\" class=\"wp-image-13699\" style=\"width:580px;height:442px\"\/><\/figure>\n\n\n\n<p>A few points to consider here:<\/p>\n\n\n\n<p>a) We can select either a file or a folder.<br>b) If it\u2019s a folder, the schemas of the files inside it should be the same.<br>c) This implies that the data can only be moved into a single table.<br>d) There is no option to incrementally load the data from the file or files.<\/p>\n\n\n\n<p>5. Select the destination.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-37.webp\" alt=\"how to run pipeline in ADF\" class=\"wp-image-13700\"\/><\/figure><\/div>\n\n\n<p>a. 1 \u2013 Select the destination type<br>b. 2 \u2013 Select the destination linked service<br>c. 3 \u2013 There is an option to create the table if it does not exist<\/p>\n\n\n\n<p>6. Click Next and we will be taken to the window below.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-38.webp\" alt=\"adf pipeline in azure\" class=\"wp-image-13701\" style=\"width:348px;height:327px\"\/><\/figure>\n\n\n\n<p>Here:<\/p>\n\n\n\n<p>a. 1 \u2013 We can change the name of the copy data activity based on project requirements.<br>b. 2 \u2013 We can give the task a description.<br>c. 3 \u2013 Specify the number of concurrent copy tasks.<\/p>\n\n\n\n<p>7. 
Click Next, review and finish, and we will be taken to the final deployment window.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-40.webp\" alt=\"adf pipeline in azure\" class=\"wp-image-13703\" style=\"width:655px;height:326px\"\/><\/figure>\n\n\n\n<p>The wizard will give us one script:<\/p>\n\n\n\n<ol class=\"wp-block-list\" style=\"list-style-type:lower-alpha\">\n<li>One for creating the control table and inserting data into it.<\/li>\n<\/ol>\n\n\n\n<p>Download this script and run it on the database where the control table is supposed to exist.<\/p>\n\n\n\n<p>8. After the wizard has finished successfully, it will have created the pipelines and datasets shown below.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2023\/11\/image-41.webp\" alt=\"adf pipeline in azure\" class=\"wp-image-13704\" style=\"width:424px;height:484px\"\/><\/figure>\n\n\n\n<p>9. 
Trigger the pipeline.<\/p>\n\n\n\n<ul class=\"blog-detail-list wp-block-list\">\n<li>A new table with data will be created in the target database.<\/li>\n<\/ul>\n\n\n\n<p><strong>Control Table Structure<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Name of the column<\/strong><\/td><td><strong>Description<\/strong><\/td><\/tr><tr><td>SourceObjectSettings<\/td><td>Details such as the source schema and object name, in JSON format<\/td><\/tr><tr><td>SourceConnectionSettingsName<\/td><td>Source connection setting name (if any)<\/td><\/tr><tr><td>CopySourceSettings<\/td><td>Details such as the source query and partition options, in JSON format<\/td><\/tr><tr><td>SinkObjectSettings<\/td><td>Details such as the destination schema and object name, in JSON format<\/td><\/tr><tr><td>SinkConnectionSettingsName<\/td><td>Destination connection setting name (if any)<\/td><\/tr><tr><td>CopySinkSettings<\/td><td>Additional details such as the pre-copy script, in JSON format<\/td><\/tr><tr><td>CopyActivitySettings<\/td><td>Source-to-destination mapping details, in JSON format<\/td><\/tr><tr><td>TopLevelPipelineName<\/td><td>Name of the main pipeline that invokes the sub-pipeline\/s<\/td><\/tr><tr><td>TriggerName<\/td><td>Name of the trigger responsible for running the pipeline\/s<\/td><\/tr><tr><td>DataLoadingBehaviorSettings<\/td><td>Details such as watermark columns and loading behavior, in JSON format<\/td><\/tr><tr><td>TaskId<\/td><td>Id for the copy data task<\/td><\/tr><tr><td>CopyEnabled<\/td><td>1 or 0<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Limitations of the Azure Data Factory Pipeline<\/strong><\/h3>\n\n\n\n<ul class=\"blog-detail-list wp-block-list\">\n<li>The Copy Data tool does not currently support metadata-driven ingestion for incrementally copying only new files, but we can bring our own parameterized pipelines to achieve that.<\/li>\n\n\n\n<li>The IR name, database type, and file format type cannot be parameterized in ADF. For example, if we want to ingest data from both Oracle and SQL Server, we will need two different parameterized Azure Data Factory pipelines; however, a single control table can be shared by the two sets of pipelines.<\/li>\n\n\n\n<li>The SQL scripts generated by the Copy Data tool use OPENJSON. If we are using SQL Server to host the control table, it must be <a href=\"https:\/\/viewmyprojects.com\/winwirewp\/blog\/temporal-tables-in-sql-server-2016-azure-sql-databases\/\">SQL Server 2016<\/a> (13.x) or later to support the OPENJSON function.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h3>\n\n\n\n<p>We went through the steps required to implement a metadata-driven Azure Data Factory pipeline using Azure\u2019s wizard-like workflow. This saves a lot of development effort and is beneficial if your team is not well versed in SQL coding. Though this approach has some limitations, as mentioned above, it gives teams a great starting point with a very short turnaround time. 
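<\/p>\n\n\n\n<p>As a side note on the OPENJSON limitation mentioned in this post, the generated lookups parse the JSON stored in the control table columns roughly along these lines. This is a hedged sketch, not the tool\u2019s exact code \u2013 the table name and the JSON property paths are assumptions for illustration:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>-- Sketch of an OPENJSON lookup over the control table (SQL Server 2016+).\n-- dbo.MetadataControlTable and the $.schema \/ $.table paths are assumptions.\nSELECT t.TaskId,\n       j.[schema] AS SourceSchema,\n       j.[table]  AS SourceTable\nFROM dbo.MetadataControlTable AS t\nCROSS APPLY OPENJSON(t.SourceObjectSettings)\n     WITH ([schema] NVARCHAR(128) '$.schema',\n           [table]  NVARCHAR(128) '$.table') AS j\nWHERE t.CopyEnabled = 1;<\/code><\/pre>\n\n\n\n<p>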
Teams can then customize the solution based on their project requirements.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When we want to copy a huge number of objects (for example, thousands of tables) or load data from a variety of sources, the appropriate approach is to list the object names, along with the required copy behaviors, in a control table, and then use parameterized pipelines to read the same from the control table and&hellip; <a class=\"more-link\" href=\"https:\/\/viewmyprojects.com\/winwirewp\/blog\/azure-data-factory-pipeline\/\">Continue reading <span class=\"screen-reader-text\">How to create a metadata-driven Azure Data Factory Pipeline<\/span><\/a><\/p>\n","protected":false},"author":53,"featured_media":16322,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_eb_attr":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[1,60,59],"tags":[],"class_list":["post-13682","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","category-data-and-ai-blogs","category-blogs","entry"],"acf":[],"featured_image_src":"https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1.webp","author_info":{"display_name":"Lokesh","author_link":"https:\/\/viewmyprojects.com\/winwirewp\/author\/lokesh\/"},"views":4216,"uagb_featured_image_src":{"full":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1.webp",800,440,false],"thumbnail":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1-150x150.webp",150,150,true],"medium":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1-300x165.webp",300,165,true],"medium_large":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1-768x422.webp",750,412,true]
,"large":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1.webp",750,413,false],"1536x1536":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1.webp",800,440,false],"2048x2048":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1.webp",800,440,false],"post-thumbnail":["https:\/\/viewmyprojects.com\/winwirewp\/wp-content\/uploads\/2022\/11\/November22_Blog_Metadata-1.webp",800,440,false]},"uagb_author_info":{"display_name":"Lokesh","author_link":"https:\/\/viewmyprojects.com\/winwirewp\/author\/lokesh\/"},"uagb_comment_info":0,"uagb_excerpt":"When we want to copy huge amounts of objects (for example, thousands of tables) or load data from variety of sources, the appropriate approach is to input the name list of the objects with required copy behaviors in a control table, and then use parameterized pipelines to read the same from the control table 
and&hellip;&hellip;","_links":{"self":[{"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/posts\/13682","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/users\/53"}],"replies":[{"embeddable":true,"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/comments?post=13682"}],"version-history":[{"count":4,"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/posts\/13682\/revisions"}],"predecessor-version":[{"id":17695,"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/posts\/13682\/revisions\/17695"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/media\/16322"}],"wp:attachment":[{"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/media?parent=13682"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/categories?post=13682"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/viewmyprojects.com\/winwirewp\/wp-json\/wp\/v2\/tags?post=13682"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}