Newest Relevant Knowledge

Articles

Achieving AI readiness through a comprehensive modernization

Generative AI is sending shockwaves through the business world, due in no small part to powerful tools transforming how we live and work. As with prior massive paradigm shifts, successful businesses must adapt for the future. When businesses infuse cutting-edge innovations like AI into their operations, it can drive sustainable, long-term growth, futureproof them against economic headwinds, and create lasting competitive advantage.

While the market dialog is dominated by incredible new AI-driven services, there is untapped value in the hundreds of millions of existing applications that can now be modernized and infused with AI.

Without modernization, organizations may miss out on the full value of their investments, lag behind the competition, or fall prey to costly disruptions—and they certainly won’t be positioned to develop tomorrow’s leading AI innovations. Pitfalls like these are already a reality for many: according to a Forrester study exploring modernization, one in four business decision-makers experienced digital platform failures due to modernization challenges.

Modernization reshapes more than technology

While apps and platforms are essential to modernization, businesses should also align their people, processes, and skillsets so the entire enterprise is working toward the same goal. This is especially true with AI, since it creates a new way of working that disrupts long-standing routines. In other words, AI is both a catalyst for and an accelerator of modernization.

I invite you to continue charting a modernization course with me in the next blog in our series. We’ll walk through specific steps for effective modernization identified by the Forrester Consulting Application Modernization Checklist that Microsoft commissioned. More Details

Ready for AI? Modernize your systems — Start today!

Azure Cobalt 100-based Virtual Machines are now generally available

The Cobalt 100-based VMs consist of our new general-purpose Dpsv6-series and Dplsv6-series and our memory-optimized Epsv6-series VMs. They offer up to 50% better price performance than our previous generation Arm-based VMs, making them an attractive option for a wide range of scale-out and cloud-native Linux-based workloads, including data analytics, web and application servers, open-source databases, caches, and more.

Models for generative AI are rapidly expanding in size and complexity, reflecting a prevailing industry trend toward ever-larger architectures. Industry-standard benchmarks and cloud-native workloads consistently push the boundaries, with models now reaching billions and even trillions of parameters. A prime example of this trend is the recent unveiling of Llama 2, which boasts a staggering 70 billion parameters, making it MLPerf’s most significant test of generative AI to date (figure 1). This monumental leap in model size is evident when comparing it to previous industry standards such as GPT-J, which has roughly 10x fewer parameters. Such exponential growth underscores the evolving demands and ambitions within the AI industry, as customers strive to tackle increasingly complex tasks and generate more sophisticated outputs.

The increase in performance matters not only relative to previous generations of comparable infrastructure. In the MLPerf benchmark results, Azure’s NC H100 v5 series virtual machines stand out against other cloud computing submissions. Notably, when compared to cloud offerings with smaller memory capacities per accelerator, such as those with 16GB of memory per accelerator, the NC H100 v5 series VMs exhibit a substantial performance boost. With nearly six times the memory per accelerator, Azure’s purpose-built AI infrastructure series demonstrates a performance speedup of 8.6x to 11.6x (figure 3). This represents a performance increase of 50% to 100% for every byte of GPU memory, showcasing the capacity of the NC H100 v5 series. These results underscore the series’ ability to set performance standards in cloud computing, offering organizations a robust solution for their evolving computational requirements.

Learn more about Azure generative AI.... Read More

Switch to Cobalt 100 VMs for up to 50% better price performance — Explore the benefits today!

Live edit of Direct Lake models in Power BI Desktop

Direct Lake speeds up data-driven decisions by unlocking incredible performance directly against OneLake while ensuring maximum data reusability across Fabric. Last month, we announced a major update for developers working on Direct Lake models: live editing of Power BI semantic models in Direct Lake mode. Now, you can use Power BI Desktop to edit Direct Lake semantic models, improving your data modeling experience and allowing export to Power BI Project (PBIP) for professional development workflows. With live edit, every modification is applied to the semantic model in the workspace, ensuring a seamless and efficient workflow.

Start by enabling the preview feature: go to File > Options and settings > Options > Preview features and check the box next to “Live edit of Power BI semantic models in Direct Lake mode”.

By opening the model for editing in Power BI Desktop, you can modify the semantic model directly. Since it’s a live edit, all changes are instantly applied to the semantic model in the Fabric workspace without needing to save. Changes include all modeling tasks, such as renaming tables and columns, adding or removing tables from the Lakehouse, creating measures, and creating calculation groups. DAX query view is available for running DAX queries to preview data and test measures before saving them to the model.
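As a minimal sketch of how a measure might be tried out in DAX query view before saving it to the model (the Sales table and Sales[Amount] column here are hypothetical placeholders, not names from the article):

```dax
// Define a candidate measure and evaluate it in DAX query view.
// Nothing is written to the model until you choose to save it.
DEFINE
    MEASURE Sales[Total Amount] = SUM ( Sales[Amount] )
EVALUATE
    SUMMARIZECOLUMNS ( "Total Amount", [Total Amount] )
```

Running a query like this previews the measure’s result against live data in the workspace, which is useful for validating logic before adding the measure to the semantic model.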

Further details, including requirements, considerations, and limitations, can be found in the documentation. Follow the Power BI monthly feature summary blog for regular updates on this experience.... Read More

Speed up decisions with Direct Lake! Edit Power BI models live — Experience it now!

Add reusable custom components in Design Studio!

A custom component, also referred to as a web template, is a Power Pages site metadata record used to store template source content. Previously, adding a custom component to a web page required writing code. Now, makers can add custom components to a section of a page in the design studio in a no-code manner, just like other components.

With this addition, makers can now:

  • Extend the design studio component library with web template-based custom components
  • Create web templates and use them as components in web pages
  • Reuse these components across web pages in the site, providing parameters to meet their requirements

Visit make.powerpages.microsoft.com to get started. To learn more, check out the documentation. We look forward to hearing from you!... Read More

Easily add custom components in Power Pages, no coding required — Try it now!

Exciting New Updates for Pipelines in Power Platform

If you as an admin want makers to discover your custom host, rather than the tenant-wide platform host, when they navigate to Pipelines, this is the feature you’ve been waiting for. Since a change made earlier this year, the platform host and its capabilities have been the landing pad for any maker going to the Pipelines page, thanks to its out-of-the-box readiness. Previously, if admins wanted makers to use centrally governed pipelines in a custom host instead of personal pipelines in the platform host, they had to first set up a custom host and associate makers’ development environments with it. With a default custom host set, users with pipeline create access, as well as owners of existing pipelines in that custom host, can associate new development environments simply by going to the Pipelines experience in those environments and deploying through a pipeline in the host.

Now, when tenant admins navigate to the Deployment Pipeline Configuration app for the platform host (by clicking “Manage pipelines” from the Pipelines page in make.powerapps.com), they will see a new setting under Advanced Settings. Admins can then provide the environment ID for a custom pipelines host to use instead of the platform host when makers use pipelines in an environment that has not yet been associated with a pipelines host.

Finally, Copilot-generated deployment notes are available in 20 major languages* for makers outside of the U.S. Depending on your language settings on the Power Platform page you’re deploying from, Copilot will generate deployment notes in that language. Of course, this also means that Copilot can read and process solutions that aren’t just in English! *Supported languages include English, Chinese (Simplified), Czech, Danish, Dutch, Finnish, French, German, Greek, Italian, Japanese, Korean, Norwegian (Bokmål), Polish, Portuguese (Brazil), Russian, Spanish, Swedish, Thai, and Turkish.

Stay tuned for more exciting Power Platform ALM features coming later this year, and be sure to check out Microsoft Ignite taking place November 18–22, 2024 for highly anticipated announcements!... Read More

Enable makers to use your custom host for Pipelines easily — Set it up now!

Build Your Own Copilot with Azure

Learn how to build your own copilot with Azure—empowering your organization with customized, scalable, and high-performing solutions. Register today for this exclusive event.

Register Now

PARTNERING IN THE ERA OF AI

Partner with MTC on your projects. We’ll invest with you. A 20-year Microsoft outsourcing partner on the leading edge, known for outsourcing value and a broad range of capabilities, and now focused on AI. Get the Microsoft AI Partnering Playbook:

Download the Playbook

Past Articles