{"componentChunkName":"component---src-templates-works-work-js","path":"/portfolio-work-8/","result":{"data":{"site":{"siteMetadata":{"title":"Kunjal Panchal"}},"markdownRemark":{"id":"f59323cb-d4e2-5e69-af2e-953c13c4b3f8","excerpt":"","rawMarkdownBody":"\n\n<!-- :My journey:<br/>\n<iframe src=\"https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:6795392367877877760\" height=\"366\" width=\"504\" frameborder=\"0\" allowfullscreen=\"\" title=\"Embedded post\"></iframe> -->","html":"<!-- :My journey:<br/>\n<iframe src=\"https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:6795392367877877760\" height=\"366\" width=\"504\" frameborder=\"0\" allowfullscreen=\"\" title=\"Embedded post\"></iframe> -->","frontmatter":{"title":"Research PhD Intern","date":"August 02, 2024","description":"• Developed an on‑device (Android, Snapdragon 765G) inference pipeline for video processing and assembly using a visual‑language model. Leveraged PyTorch Quantization and PyTorch Mobile to achieve approximately 3× lower peak memory consumption. <br/> • Refactored the visual‑language model to support statically‑typed forward passes and data‑dependent control flows, reducing inference latency by 16.67%. Additionally optimized memory consumption through operator fusion and parameter hoisting techniques."}}},"pageContext":{"slug":"/portfolio-8/"}},"staticQueryHashes":["3649515864","63159454"]}