If you want to write the JSON response as-is, you can use an HTTP connector. However, please note that the HTTP connector doesn't support pagination. If you want to keep using the REST connector and write a CSV file as output, can you please specify how you want the nested objects and arrays to be written?
I wish I could fix the issue directly. Since your Cosmos DB documents contain arrays and ADF doesn't support serializing arrays for Cosmos DB, this is the workaround I can provide. First, export all your documents to JSON files using "export JSON as-is" (to Blob, ADLS, or a file system, any file storage). I think you already know how to do that. This way, each collection will have a JSON file.
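As a rough sketch, the "export as-is" step could be a copy activity like the one below. The dataset names (`CosmosDocsDataset`, `BlobJsonDataset`) are placeholders for your own datasets, and the source type assumes the Cosmos DB SQL API:

```json
{
  "name": "ExportCosmosAsIs",
  "type": "Copy",
  "inputs": [ { "referenceName": "CosmosDocsDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "BlobJsonDataset", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "CosmosDbSqlApiSource" },
    "sink": { "type": "JsonSink" }
  }
}
```

Leaving out any schema mapping is what makes the copy "as-is": each document is written to the JSON file verbatim, arrays included.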
"@triggerBody().folderPath" and "@triggerBody().fileName" capture the path of the last-created blob in the event trigger. You need to map your pipeline parameters to these two trigger properties. Please follow this link to do the parameter passing and referencing. Hope that helps.
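A minimal sketch of that mapping, as it appears in the trigger definition; the pipeline name and parameter names (`sourceFolder`, `sourceFile`) are placeholders you would replace with your own:

```json
{
  "pipelines": [
    {
      "pipelineReference": {
        "referenceName": "CopyNewBlobPipeline",
        "type": "PipelineReference"
      },
      "parameters": {
        "sourceFolder": "@triggerBody().folderPath",
        "sourceFile": "@triggerBody().fileName"
      }
    }
  ]
}
```

Inside the pipeline itself you would then reference the values as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile`.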
Copy Blob Data To Sql Database in Azure Data Factory with Conditions
I hope this helps. Data Factory in general only moves data; it doesn't modify it. What you are trying to do might be done using a staging table in the sink SQL database. First load the JSON values as-is from Blob Storage into the staging table, then copy from the staging table to the real target table, applying your filtering logic in the SQL command used to extract the data.
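The second hop (staging table to target table) could be sketched as a copy activity whose source runs a query; the dataset names, table name `dbo.StagingTable`, and the `WHERE` clause are all hypothetical placeholders for your own filtering logic:

```json
{
  "name": "StagingToTarget",
  "type": "Copy",
  "inputs": [ { "referenceName": "StagingTableDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "TargetTableDataset", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": {
      "type": "AzureSqlSource",
      "sqlReaderQuery": "SELECT Id, Payload FROM dbo.StagingTable WHERE Status = 'active'"
    },
    "sink": { "type": "AzureSqlSink" }
  }
}
```

Because the filtering happens in `sqlReaderQuery`, the transformation runs in the database rather than in Data Factory, which is exactly the point of the staging-table pattern.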
Azure Data Factory, how to incrementally copy blob data to SQL
I hope this helps. You could use an ADF event trigger to achieve this. Define your event trigger as "blob created" and specify the blobPathBeginsWith and blobPathEndsWith properties based on your filename pattern.
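Such a trigger might look roughly like this; the trigger name, path prefix/suffix, and the `scope` placeholders (`<subId>`, `<rg>`, `<account>`) are assumptions you would adjust to your storage account and filename pattern:

```json
{
  "name": "BlobCreatedTrigger",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/input/blobs/incoming-",
      "blobPathEndsWith": ".csv",
      "events": [ "Microsoft.Storage.BlobCreated" ],
      "scope": "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
    }
  }
}
```

With this in place, only blobs whose path starts and ends with the given strings fire the trigger, so each new matching file kicks off one pipeline run.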