Answers

Chapter 1

  1. A software architect needs to be aware of any technology that can help them solve problems faster and create better-quality software.
  2. Azure provides, and keeps improving, lots of components that a software architect can implement in solutions.
  3. The best software development process model depends on the kind of project, team, and budget you have. As a software architect, you need to consider all these variables and understand different process models so you can choose the one that best fits the environment’s needs.
  4. A software architect pays attention to any user or system requirement that can have an effect on performance, security, usability, and so on.
  5. All of them, but the non-functional requirements need to be given more attention.
  6. Design Thinking and Design Sprint are tools that help software architects define exactly what users need.
  7. User Stories are good when we want to define functional requirements. They can be written quickly and commonly deliver not only the feature required, but also the acceptance criteria for the solution.
  8. Caching, asynchronous programming, and correct object allocation.
  9. To check that the implementation is correct, a software architect compares it with models and prototypes that have already been designed and validated.

Chapter 2

  1. Vertically and horizontally.
  2. Yes, you can deploy automatically to an already-defined web app or create a new one directly using Visual Studio.
  3. To take advantage of available hardware resources by minimizing the time they remain idle.
  4. Code behavior is deterministic, so it is easy to debug. The execution flow mimics the flow of sequential code, which means it is easier to design and understand.
  5. Because the right order minimizes the number of gestures that are needed to fill in a form.
  6. Because it allows file paths to be manipulated in a way that is independent of the operating system.
  7. It can be used with several .NET Core versions, as well as with several versions of the classic .NET Framework.
  8. Console, .NET Core, .NET (5+), and .NET Standard class library; ASP.NET Core, test, and microservices.

Chapter 3

  1. No, it is available for several platforms.
  2. Automatic, manual, and load test plans.
  3. Yes, they can – through Azure DevOps feeds.
  4. To manage requirements and to organize the whole development process.
  5. Epic work items represent high-level system subparts that are made up of several features.
  6. A parent-child relationship.

Chapter 4

  1. IaaS is a good option when you are migrating from an on-premises solution or if you have an infrastructure team.
  2. PaaS is the best option for fast and safe software delivery in systems where the team is focused on software development.
  3. If the solution you intend to deliver is provided by a well-known player, such as a SaaS, you should consider using it.
  4. Serverless is an option when you are building a new system if you don’t have people who specialize in infrastructure, and you don’t want to worry about the infrastructure for scaling.
  5. Azure SQL Database can be up in minutes, and you will have all the power of Microsoft SQL Server afterward. More than that, Microsoft will handle the database server infrastructure.
  6. Azure provides a set of services called Azure Cognitive Services. These services provide solutions for vision, speech, language, search, and knowledge.
  7. In a hybrid scenario, you have the flexibility to decide on the best solution for each part of your system, while respecting the solution’s development path in the future.

Chapter 5

  1. The modularity of code and deployment modularity.
  2. No. Other important advantages include easier management of the development team and the whole CI/CD cycle, and the possibility of mixing heterogeneous technologies easily and effectively.
  3. A library that helps us implement resilient communication.
  4. Once you’ve installed Docker on your development machine, you can develop, debug, and deploy Dockerized .NET applications. You can also add Docker images to Service Fabric applications that are being handled with Visual Studio.
  5. Orchestrators are software that manage microservices and nodes in microservice clusters. Azure supports two relevant orchestrators: Azure Kubernetes Service and Azure Service Fabric.
  6. Because it decouples the actors that take part in a communication.
  7. A message broker. It takes care of service-to-service communication and events.
  8. The same message can be received several times because the sender doesn’t receive a confirmation of reception before its time-out period expires, and so it resends the message. Therefore, the effect of receiving a message once or several times must be the same, that is, the operation must be idempotent.

Chapter 6

  1. Services are needed to dispatch communication to pods, since a pod has no stable IP address.
  2. Kubernetes offers higher-level entities called Ingresses that are built on top of services, to empower clusters with all the advanced capabilities offered by a web server, such as routing HTTP/HTTPS URLs from outside the cluster to internal service URLs inside the cluster.
  3. Helm charts are a way to organize the templating and installation of complex Kubernetes applications that contain several .yaml files.
  4. Yes, with the --- separator.
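    As a minimal sketch (the object names are hypothetical), a single .yaml file can declare two objects separated by ---:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config           # hypothetical name
data:
  logLevel: info
```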
  5. It detects cluster node faults and uses the livenessProbe exposed by containers.
  6. Because Pods, having no stable location, can’t rely on the storage of the node where they are currently running.
  7. A StatefulSet is assumed to have state and achieves write/update parallelism through sharding, while a ReplicaSet has no state, so its Pods, being indistinguishable, can achieve parallelism by simply splitting the load.

Chapter 7

  1. With the help of database-dependent providers.
  2. Either by calling them Id or by decorating them with the Key attribute. This can also be done with a fluent configuration approach.
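    Both styles can be sketched as follows (the entity and property names are hypothetical):

```csharp
using System.ComponentModel.DataAnnotations;

// By convention, a property named Id would be the primary key; the Key
// attribute makes the choice explicit for a differently named property.
public class Package
{
    [Key]
    public int PackageCode { get; set; }
}

// The equivalent fluent configuration, placed inside OnModelCreating:
// builder.Entity<Package>().HasKey(m => m.PackageCode);
```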
  3. With the MaxLength and MinLength attributes, or with their equivalent fluent configuration methods.
  4. With something similar to builder.Entity<Package>().HasIndex(m => m.Name);.
  5. With something similar to:
    builder.Entity<Destination>()
        .HasMany(m => m.Packages)
        .WithOne(m => m.MyDestination)
        .HasForeignKey(m => m.DestinationId)
        .OnDelete(DeleteBehavior.Cascade);
    
  6. With Add-Migration and Update-Database in the Package Manager Console, or with dotnet ef migrations add and dotnet ef database update in the operating system console.
  7. No, but you can forcefully include them with the Include LINQ clause or by using the UseLazyLoadingProxies option when configuring your DbContext. With Include, related entities are loaded together with the main entities, while with UseLazyLoadingProxies, related entities are lazy-loaded, that is, they are loaded as soon as they are required.
  8. Yes, it is, thanks to the Select LINQ clause.
  9. By calling context.Database.Migrate().

Chapter 8

  1. Redis is a distributed in-memory storage based on key-value pairs and supports distributed queuing. Its most well-known usage is for distributed caching, but it can be used as an alternative to relational databases since it is able to persist data to disk.
  2. Yes, they are. Most of this chapter’s sections are dedicated to explaining why.
  3. Write operations.
  4. The main weaknesses of NoSQL databases are their consistency and transactions, while their main advantage is performance, especially when it comes to handling distributed writes.
  5. Eventual, Consistent Prefix, Session, Bounded Staleness, Strong.
  6. No, they are not efficient in a distributed environment. GUID-based strings perform better since their uniqueness is automatic and doesn’t require synchronization operations.
  7. OwnsMany and OwnsOne.
  8. Yes, they can. Once you use SelectMany, indices can be used to search for nested objects.

Chapter 9

  1. Azure Functions is an Azure PaaS component that allows you to implement FaaS solutions.
  2. You can program Azure Functions in different languages, such as C#, F#, PHP, Python, and Node.js. You can also create functions using the Azure portal and Visual Studio Code. Additional stacks can be used by using custom handlers: https://docs.microsoft.com/en-au/azure/azure-functions/functions-custom-handlers
  3. There are two plan options in Azure Functions. The first plan is the Consumption Plan, where you are charged according to the amount you use. The second plan is the App Service Plan, where you share your App Service resources with the function’s needs.
  4. The process of deploying functions in Visual Studio is the same as in web app deployment.
  5. There are lots of ways we can trigger Azure Functions, such as using Blob Storage, Cosmos DB, Event Grid, Event Hubs, HTTP, Microsoft Graph Events, Queue storage, Service Bus, Timer, and Webhooks.
  6. Azure Functions v1 needs the .NET Framework Engine, whereas v2 needs .NET Core 2.2, and v3 needs .NET Core 3.1 and .NET 5-6.
  7. The execution of every Azure function can be monitored by Application Insights. Here, you can check the time it took to process, resource usage, errors, and exceptions that happened in each function call.
  8. They are functions that will let us write stateful workflows, managing the state behind the scenes.

Chapter 10

  1. Design patterns are good solutions to common problems in software development.
  2. While design patterns give you code implementations for typical problems we face in development, design principles help you select the best options when it comes to implementing the software architecture.
  3. The Builder pattern helps you construct sophisticated objects step by step, without having to define their construction inside the class that uses them.
  4. The Factory pattern is really useful in situations where you have multiple kinds of objects from the same abstraction, and you don’t know which of them needs to be created at compile time.
  5. The Singleton pattern is useful when you need a class that has only one instance during the software’s execution.
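    A minimal thread-safe sketch (the class name is hypothetical):

```csharp
using System;

// Lazy<T> guarantees that the single instance is created only once,
// even under concurrent access.
public sealed class AppConfiguration
{
    private static readonly Lazy<AppConfiguration> instance =
        new(() => new AppConfiguration());

    public static AppConfiguration Instance => instance.Value;

    private AppConfiguration() { }   // a private constructor blocks external creation
}
```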
  6. The Proxy pattern is used when you need to provide an object that controls access to another object.
  7. The Command pattern is used when you need to represent as an object a command whose execution will affect the behavior of another object.
  8. The Publisher/Subscriber pattern is useful when you need to provide information about an object to a group of other objects.
  9. The DI pattern is useful if you want to implement the Inversion of Control principle. Instead of creating instances of the objects that the component depends on, you just need to define their dependencies, declare their interfaces, and enable the reception of the objects by injection. You can do this by using the constructor of the class to receive the objects, tagging some class properties to receive the objects, or defining an interface with a method to inject all the necessary components.
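    Constructor injection, the most common of these options, can be sketched as follows (all names are hypothetical):

```csharp
using System;

public interface IMessageSender
{
    void Send(string message);
}

public class ConsoleSender : IMessageSender
{
    public void Send(string message) => Console.WriteLine(message);
}

public class Notifier
{
    private readonly IMessageSender sender;

    // The dependency is received from outside instead of being created here.
    public Notifier(IMessageSender sender) => this.sender = sender;

    public void Notify(string text) => sender.Send(text);
}
```

In ASP.NET Core, the mapping from IMessageSender to ConsoleSender would be registered in the dependency injection engine, for example with builder.Services.AddScoped<IMessageSender, ConsoleSender>().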

Chapter 11

  1. Changes in the language used by experts and changes in the meaning of words.
  2. Domain mapping.
  3. No; the whole communication passes through the entity, that is, the aggregate root.
  4. Because aggregates represent part-subpart hierarchies.
  5. Just one, since repositories are aggregate-centric.
  6. The application layer manipulates repository interfaces. Repository implementations are registered in the dependency injection engine.
  7. To coordinate operations on several aggregates in single transactions.
  8. The specifications for updates and queries are usually quite different, especially in simple CRUD systems. The main reason for adopting its strongest form (separate read and write storage) is the optimization of query response times.
  9. Dependency injection.
  10. No; a serious impact analysis must be performed before we adopt it.

Chapter 12

  1. No, since you will have lots of duplicate code in this approach, which will cause difficulties when it comes to maintenance.
  2. The best approach for code reuse is creating libraries.
  3. Yes. You can find components that have already been created in the libraries you’ve created before and then increase these libraries by creating new components that can be reused in the future.
  4. .NET Standard is a specification of a common API surface that allows compatibility between different .NET frameworks, from the .NET Framework to Unity. .NET Core is one implementation of .NET and is open source.
  5. By creating a .NET Standard library, you will be able to use it in different .NET implementations, such as .NET Core, .NET, the .NET Framework, and Xamarin.
  6. You can enable code reuse using object-oriented principles (inheritance, encapsulation, abstraction, and polymorphism).
  7. Generics is a sophisticated implementation that simplifies how objects with the same characteristics are treated, by defining a placeholder that will be replaced with the specific type at compile time.
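    A minimal sketch (the class name is hypothetical): a single implementation works for any element type.

```csharp
// T is a placeholder replaced with a concrete type at compile time,
// so the same code serves ints, strings, or any other type.
public class FixedStack<T>
{
    private readonly T[] items;
    private int count;

    public FixedStack(int capacity) => items = new T[capacity];

    public void Push(T item) => items[count++] = item;
    public T Pop() => items[--count];
}
```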
  8. The answer for this question is well explained by Immo Landwerth on the .NET blog: https://devblogs.microsoft.com/dotnet/the-future-of-net-standard/. The basic answer is that .NET versions 5 and above need to be thought of as the foundation for sharing code moving forward.
  9. When you refactor code, you rewrite it in a better way while respecting the contract of input and output data that the code processes.

Chapter 13

  1. No, since this would violate the principle that a service reaction to a request must depend on the request itself, and not on other messages/requests that had previously been exchanged with the client.
  2. No, since this would violate the interoperability constraint.
  3. Yes, it can. The primary action of a POST must be creation, but deletion can be performed as a side-effect.
  4. Three: the Base64 encoding of the header, the Base64 encoding of the body (payload), and the signature.
  5. From the request body.
  6. The ApiController attribute sets up some default behaviors that help in the implementation of REST services.
  7. The ProducesResponseType attribute.
  8. When using API controllers, they are declared with the Route and Http<verb> attributes. When using a minimal API, they are declared in the first argument of MapGet, MapPost, … Map{Http verb}.
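    The two styles can be sketched as follows (the route and the returned data are hypothetical; the fragment assumes an ASP.NET Core project):

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Controller style: attributes declare the route and the HTTP verb.
[ApiController]
[Route("packages")]
public class PackagesController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get() => new[] { "package1" };
}

// Minimal API style, in Program.cs: the verb is in the method name
// and the route is the first argument.
// app.MapGet("/packages", () => new[] { "package1" });
```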
  9. By adding something like builder.Services.AddHttpClient<MyProxy>() in the dependency injection part of the host configuration.

Chapter 14

  1. Because using queues is the only way to avoid time-consuming blocking calls.
  2. With the import declaration.
  3. With the standard Duration message.
  4. Version compatibility and interoperability.
  5. Better horizontal scalability, and support for the Publisher/Subscriber pattern.
  6. For two reasons: the operation is very fast, and the insertion into the first queue of a communication path must necessarily be a blocking operation.
  7. With the following XML code:
    <ItemGroup>
        <Protobuf Include="Protosfile1.proto" GrpcServices="Server/Client" />
        <Protobuf Include="Protosfile2.proto" GrpcServices="Server/Client" />
        ...
    </ItemGroup>
    
  8. With channel.BasicPublish(…).
  9. With channel.WaitForConfirmsOrDie(timeout).
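    Together, the two calls can be sketched like this (the queue name is hypothetical, and channel is assumed to be an already-open IModel from the RabbitMQ.Client package):

```csharp
using System;
using System.Text;

// channel is assumed to be an open RabbitMQ.Client.IModel.
channel.ConfirmSelect();                         // enable publisher confirms
var body = Encoding.UTF8.GetBytes("hello");
channel.BasicPublish(exchange: "",
                     routingKey: "my-queue",     // hypothetical queue name
                     basicProperties: null,
                     body: body);
// Block until the broker confirms reception, or throw on time-out.
channel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(5));
```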

Chapter 15

  1. Developer error pages and developer database error pages, production error pages, hosts, HTTPS redirection, routing, authentication and authorization, and endpoint invokers.
  2. No.
  3. False. Several tag helpers can be invoked on the same tag.
  4. ModelState.IsValid.
  5. @RenderBody().
  6. We can use @RenderSection("Scripts", required: false).
  7. We can use return View("viewname", ViewModel).
  8. Three.
  9. No; there is also the ViewData dictionary.

Chapter 17

  1. It is a W3C standard: the assembly language of a virtual machine running in W3C-compliant browsers.
  2. A web UI where dynamic HTML is created in the browser itself.
  3. Selecting a page based on the current browser URL.
  4. A Blazor component with routes attached to it, so the Blazor router can select it.
  5. Defining the .NET namespace of a Blazor component class.
  6. A local service that takes care of storing and handling all forms-related information, such as validation errors, and changes in HTML inputs.
  7. Either OnInitialized or OnInitializedAsync.
  8. Callbacks and services.
  9. The Blazor way to interact with JavaScript.
  10. Getting a reference to a component or HTML element instance.

Chapter 18

  1. Yes, and there is also a tutorial for migrating to it.
  2. Yes, and there is also a tutorial for migrating to it.
  3. When: (1) there is no need to deploy to different platforms; (2) the application interacts heavily with the hardware; (3) the performance provided by a web client is not acceptable; (4) the place where the application will run has connectivity problems.
  4. When you need to deploy to many users at the same time, or when you must deploy to different platforms and connectivity is not a problem.
  5. Today we have Xamarin.Forms, and other platforms, such as Uno and Avalonia, and soon we will have .NET MAUI.

Chapter 19

  1. A* and alpha-beta search.
  2. Because it is very difficult to apply in practice.
  3. They can learn from examples.
  4. Yes.
  5. No, they can also converge toward local minima.
  6. (1) Define your goal, (2) Provide and prepare data, (3) Train, tune and deploy your model, and (4) Test the trained model and provide feedback on it.
  7. ML.NET is a framework that can help .NET developers deliver machine learning using C#.

Chapter 20

  1. Maintainability gives you the opportunity to deliver the software you designed quickly. It also allows you to fix bugs easily.
  2. Cyclomatic complexity is a metric that measures the number of linearly independent paths through a method, based on its decision points. The higher the number, the worse the effect.
  3. A version control system will guarantee the integrity of your source code, giving you the opportunity to analyze the history of each modification that you’ve made.
  4. A garbage collector is a .NET Core/.NET (5+)/.NET Framework system, which monitors your application and detects objects that you aren’t using anymore. It disposes of these objects to release memory.
  5. The IDisposable interface is important firstly because it is a good pattern for deterministic cleanup. Secondly, it is required in classes that instantiate objects that need to be disposed of by the programmer since the garbage collector cannot dispose of them.
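    A minimal sketch of the pattern (the names are hypothetical):

```csharp
using System;
using System.IO;

// LogWriter owns a StreamWriter, so it implements IDisposable and
// releases the resource deterministically when Dispose is called.
public class LogWriter : IDisposable
{
    private readonly StreamWriter writer;
    private bool disposed;

    public LogWriter(string path) => writer = new StreamWriter(path);

    public void Write(string line) => writer.WriteLine(line);

    public void Dispose()
    {
        if (disposed) return;
        writer.Dispose();
        disposed = true;
    }
}

// A using block guarantees Dispose runs when the block ends:
// using (var log = new LogWriter("app.log")) { log.Write("started"); }
```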
  6. .NET 6 encapsulates some design patterns in some of its libraries in a way that can guarantee safer code, such as with dependency injection and Builder.
  7. Well-written code is code that any person skilled in that programming language can handle, modify, and evolve.
  8. Roslyn is the .NET compiler, which is also used for code analysis inside Visual Studio.
  9. Code analysis is a practice that considers the way the code is written to detect bad practices before compilation.
  10. Code analysis can find problems that happen even with apparently good software, such as memory leaks and bad programming practices.
  11. Roslyn is an engine that provides an API that enables analyzers to inspect your code for style, quality, maintainability, design, and other issues. This is done during design time, so you can check the mistakes before compiling your code.
  12. Visual Studio extensions are tools that have been programmed to run inside Visual Studio. These tools can help you out in some cases where Visual Studio IDE doesn’t have the appropriate feature for you to use.
  13. SonarLint, and the SonarAnalyzer.CSharp NuGet package.

Chapter 21

  1. DevOps is the approach of delivering value to the end user continuously. To do this with success, continuous integration, continuous delivery, and continuous feedback must be undertaken.
  2. Continuous integration allows you to check the quality of the software you are delivering every single time you commit a change. You can implement this by turning on this feature in Azure DevOps.
  3. Continuous delivery allows you to deploy a solution once you are sure that all the quality checks have passed the tests you designed. Azure DevOps helps you with that by providing you with relevant tools.
  4. Continuous feedback is the adoption of tools in the DevOps life cycle that enable fast feedback when it comes to performance, usability, and other aspects of the application you are developing.
  5. The build pipeline will let you run tasks for building and testing your application, while the release pipeline will give you the opportunity to define how the application will be deployed in each scenario.
  6. Application Insights is a helpful tool for monitoring the health of the system you’ve deployed, which makes it a fantastic continuous feedback tool.
  7. Test and Feedback is a tool that allows stakeholders to analyze the software you are developing and enables a connection with Azure DevOps to open tasks and even bugs.
  8. To maximize the value that the software provides for the target organization.
  9. No; it requires the acquisition of all competencies that are required to maximize the value added by the software.
  10. Because when a new user subscribes, its tenant must be created automatically, and because new software updates must be distributed to all the customers’ infrastructures.
  11. Yes; Terraform is an example.
  12. Azure Pipelines and GitHub, together with GitHub Actions, are good options for this.
  13. Your business depends on the SaaS supplier, so its reliability is fundamental.
  14. No; scalability is just as important as fault tolerance and automatic fault recovery.

Chapter 22

  1. It is an approach that makes sure that every single commit to the code repository is built and tested. This is done by frequently merging the code into a main body of code.
  2. It is an approach that makes sure that complete software improvements are automatically delivered to staging and/or production environments, after any manual approval steps.
  3. Yes, you can have DevOps separately and then enable Continuous Integration later. You can also have Continuous Integration enabled without Continuous Delivery enabled. Your team and process need to be ready and attentive for this to happen.
  4. You may mistake CI for a continuous delivery process. In this case, you may cause damage to your production environment. In the worst scenario, you could, for example, deploy a feature that isn’t ready, cause an outage at a bad time for your customers, or even suffer a bad side effect from an incorrect fix.
  5. A multi-stage environment protects production from bad releases when a fully automated build and deployment pipeline is in place.
  6. Automated tests anticipate bugs and bad behaviors before they reach production.
  7. Pull requests allow code reviews to happen before code is merged into the master branch.
  8. No; pull requests can help you in any development approach where you have Git as your source control.

Chapter 23

  1. Because most of the tests must be repeated after any software change occurs.
  2. Because the probability of exactly the same error occurring in a unit test and in its associated application code is very low.
  3. [Theory] is used when the test method defines several tests, while [Fact] is used when the test method defines just one test.
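    A sketch using xUnit (assumes the xunit NuGet package; the tested sums are trivial placeholders):

```csharp
using Xunit;

public class SumTests
{
    // Fact: the method defines exactly one test.
    [Fact]
    public void Sum_OfTwoAndThree_IsFive() => Assert.Equal(5, 2 + 3);

    // Theory: the method defines one test per data row.
    [Theory]
    [InlineData(1, 1, 2)]
    [InlineData(2, 3, 5)]
    public void Sum_IsCorrect(int a, int b, int expected) =>
        Assert.Equal(expected, a + b);
}
```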
  4. Assert.
  5. Setup, Returns, and ReturnsAsync.
  6. Yes; with ReturnsAsync.
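    A sketch with the Moq NuGet package (the repository interface is hypothetical):

```csharp
using System.Threading.Tasks;
using Moq;

public interface IMyRepository
{
    int Count();
    Task<int> CountAsync();
}

public static class Example
{
    public static async Task Run()
    {
        var mock = new Mock<IMyRepository>();
        // Setup selects the member to fake; Returns/ReturnsAsync define its result.
        mock.Setup(r => r.Count()).Returns(10);
        mock.Setup(r => r.CountAsync()).ReturnsAsync(10);

        var repo = mock.Object;
        int syncResult = repo.Count();
        int asyncResult = await repo.CountAsync();
    }
}
```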
  7. No; it depends on the complexity of the user interface and how often it changes.
  8. The ASP.NET Core pipeline isn’t executed, but inputs are passed directly to controllers.
  9. Usage of the Microsoft.AspNetCore.Mvc.Testing NuGet package.
  10. Usage of the AngleSharp NuGet package.

Download the example code files

The code bundle for the book is hosted on GitHub at https://github.com/PacktPublishing/Software-Architecture-with-C-10-and-.NET-6-3E.

Join our book’s Discord space

Join the book’s Discord workspace for an Ask Me Anything session with the authors:

https://packt.link/SAcsharp10dotnet6
