// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "TestServer",
    platforms: [
        .macOS(.v13)
    ],
    dependencies: [
        // 💧 A server-side Swift web framework.
        .package(url: "https://github.com/vapor/vapor.git", from: "4.111.0"),
        .package(url: "https://github.com/vapor/fluent", from: "4.12.0"),
        .package(url: "https://github.com/vapor/fluent-sqlite-driver", from: "4.8.0"),
        .package(url: "https://github.com/vapor/sql-kit", from: "3.33.2"),
        .package(url: "https://github.com/lukaskubanek/LoremSwiftum", from: "2.2.3"),
        .package(url: "https://github.com/vapor/fluent-postgres-driver", from: "2.10.0"),
        .package(url: "https://github.com/vapor/jwt", from: "5.1.2"),
        .package(url: "https://github.com/vapor/queues-redis-driver.git", from: "1.0.0"),
    ],
    targets: [
        .target(
            name: "App",
            dependencies: [
                .product(name: "Vapor", package: "vapor"),
                .product(name: "Fluent", package: "fluent"),
                .product(name: "FluentSQLiteDriver", package: "fluent-sqlite-driver"),
                .product(name: "SQLKit", package: "sql-kit"),
                .product(name: "LoremSwiftum", package: "LoremSwiftum"),
                .product(name: "JWT", package: "jwt"),
                .product(name: "FluentPostgresDriver", package: "fluent-postgres-driver"),
                .product(name: "QueuesRedisDriver", package: "queues-redis-driver")
            ],
            swiftSettings: [
                // Enable better optimizations when building in Release configuration. Despite the use of
                // the `.unsafeFlags` construct required by SwiftPM, this flag is recommended for Release
                // builds. See <https://github.com/swift-server/guides#building-for-production> for details.
                .unsafeFlags(["-cross-module-optimization"], .when(configuration: .release))
            ]
        ),
        .executableTarget(name: "Run", dependencies: [
            .target(name: "App")
        ]),
        .testTarget(name: "AppTests", dependencies: [
            .target(name: "App"),
            .product(name: "XCTVapor", package: "vapor")
        ])
    ]
)
Create an AsyncScheduledJob
import Foundation
import Vapor
import Queues

struct ScheduledJobs: AsyncScheduledJob {
    // Add extra services here via dependency injection, if you need them.
    func run(context: QueueContext) async throws {
        context.logger.info("Starting ScheduledJobs")
        print("✅ It is called")
        // Call other services using context.application.client
        context.logger.info("ScheduledJobs completed")
    }
}
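For the job to actually fire, it must be registered with the Redis queues driver during application configuration. A minimal sketch, assuming a local Redis instance and a once-per-minute schedule (both the URL and the schedule are illustrative choices, not requirements):

```swift
import Queues
import QueuesRedisDriver
import Vapor

public func configure(_ app: Application) async throws {
    // Point the queues system at your Redis instance (URL is an example).
    try app.queues.use(.redis(url: "redis://127.0.0.1:6379"))

    // Register the scheduled job; here it runs once every minute.
    app.queues.schedule(ScheduledJobs())
        .minutely()

    // Start the in-process scheduler so the job actually runs.
    try app.queues.startScheduledJobs()
}
```

Without the call to `startScheduledJobs()`, the job is registered but never executed, which is a common source of "why is my job not called" confusion.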
I hope this article helps those of you who want to run scheduled jobs using Redis. Please like the post or leave a comment; it helps me continue sharing my knowledge for free.
This post is for anyone who wants to use AWS Lambda with the OpenAPI Generator. The official guide is useful, but I felt some information was missing, so I wrote this post. With it, you can successfully run an AWS Lambda function on your local machine and debug your code.
Let's start with a very simple example. You only need 4 files, and I'll explain the details. (Ignore the Tests folder; it isn't needed in this tutorial.)
Package.swift
NativeMobileServer.swift
openapi.yaml
openapi-generator-config.yaml
Step 1. Define OpenAPI Spec
This OpenAPI spec is for the tutorial. Please check the folder and file structure, then create an openapi.yaml:
openapi: 3.1.0
info:
  title: MobileJobService
  version: 1.0.0
paths:
  /jobs/fetch:
    post:
      summary: Fetch job data from external source
      description: >
        This endpoint is called by a scheduled Lambda or backend service.
        It fetches job data from the given URL and processes it using the provided prompt.
      operationId: fetchJobs
      tags:
        - jobs
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/FetchJobsRequest'
      responses:
        '200':
          description: Successfully fetched and processed job data
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/JobListResponse'
        '400':
          description: Invalid input parameters
        '500':
          description: Internal error during job fetch or processing
components:
  schemas:
    FetchJobsRequest:
      type: object
      required:
        - url
        - prompt
      properties:
        url:
          type: string
          format: uri
          description: Target URL to scrape or fetch job data from
        prompt:
          type: string
          description: Instruction / extraction prompt used to parse the fetched page
    JobListResponse:
      type: array
      items:
        $ref: '#/components/schemas/Job'
    Job:
      type: object
      required:
        - id
        - title
      properties:
        id:
          type: string
          description: Unique identifier for the job
        title:
          type: string
          description: Job title
        country:
          type: string
          description: Country code (e.g. SG, US, TW)
        city:
          type: string
          description: City name (e.g. Singapore)
        postedAt:
          type: string
          format: date-time
          description: When this job was posted, if known
        company:
          type: string
          description: Company name
        team:
          type: string
          description: Team / department (e.g. Mobile, Backend, Growth)
        jobDescriptionLink:
          type: string
          format: uri
          description: Public link to full job description
        jobApplyLink:
          type: string
          format: uri
          description: Public link to apply
        salary:
          $ref: '#/components/schemas/Salary'
        description:
          type: string
          description: Cleaned / extracted full-text description for the role
    Salary:
      type: object
      properties:
        min:
          type: number
          description: Minimum compensation
        max:
          type: number
          description: Maximum compensation
        basis:
          type: string
          enum: [year, month]
          description: Salary period basis (yearly or monthly)
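For reference, the openapi-generator-config.yaml listed among the four files is typically just a few lines telling the Swift OpenAPI Generator what to emit. For a server target it might look like this (a sketch based on the generator's documented options; adjust to your needs):

```yaml
# Generate the request/response types plus the server-side protocol stubs.
generate:
  - types
  - server
accessModifier: public
```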
And I can see the results from the AWS Lambda function.
Conclusion
Swift is very powerful for developing server-side applications, and there are a lot of great open source projects. In Part 2, I'll explain how to deploy the Swift AWS Lambda function to AWS using the SAM CLI.
Swift Format is made by Apple. If your Xcode is a recent version (Xcode 16 or later), you don't need to install it separately; the toolchain already contains swift-format.
Security topics are common interview questions. Here I summarize what certificates, provisioning profiles, code signing, App Transport Security, and CryptoKit are.
Overview
Use the Security framework to protect information, establish trust, and control access to software. Broadly, security services support these goals:
Establish a user’s identity (authentication) and then selectively grant access to resources (authorization).
Secure data, both on disk and in motion across a network connection.
Ensure the validity of code to be executed for a particular purpose.
As shown in the image below, you can also use lower level cryptographic resources to create new secure services. Cryptography is difficult and the cost of bugs typically so high that it’s rarely a good idea to implement your own cryptography solution. Rely on the Security framework when you need cryptography in your app.
Digital certificates can be used to securely identify a client or server, and to encrypt the communication between them using the public and private key pair.
A certificate contains a public key, information about the client (or server), and is signed (verified) by a CA.
A certificate and its associated private key are known as an identity. Certificates can be freely distributed, but identities must be kept secure. The freely distributed certificate, and especially its public key, are used for encryption that can be decrypted only by the matching private key. The private key part of an identity is stored as a PKCS #12 identity certificate (.p12) file and encrypted with another key that’s protected by a passphrase. An identity can be used for authentication (such as 802.1X EAP-TLS), signing, or encryption (such as S/MIME).
The certificate and identity formats Apple devices support are:
Certificate: .cer, .crt, .der, X.509 certificates with RSA keys
If a certificate has been issued from a CA whose root isn’t in the list of trusted root certificates, iOS, iPadOS, macOS, or visionOS won’t trust the certificate. This is often the case with enterprise-issuing CAs. To establish trust, use the method described in certificate deployment. This sets the trust anchor at the certificate being deployed. For multitiered public key infrastructures, it may be necessary to establish trust not only with the root certificate, but also with any intermediates in the chain. Often, enterprise trust is configured in a single configuration profile that can be updated with your MDM solution as needed without affecting other services on the device.
Root certificates on iPhone, iPad, and Apple Vision Pro
Root certificates installed manually on an unsupervised iPhone, iPad, or Apple Vision Pro through a profile display the following warning, “Installing the certificate “name of certificate” adds it to the list of trusted certificates on your iPhone or iPad. This certificate won’t be trusted for websites until you enable it in Certificate Trust Settings.”
The user can then trust the certificate on the device by going to Settings > General > About > Certificate Trust Settings.
Note: Root certificates installed by an MDM solution or on supervised devices disable the option to change the trust settings.
A development provisioning profile allows your app to launch on devices and use certain app services during development. For an individual, a development provisioning profile allows apps signed by you to run on your registered devices. For an organization, a development provisioning profile allows apps developed by a team to be signed by any member of the team and installed on their devices.
The development provisioning profile contains:
A wildcard App ID that matches all your team’s apps or an explicit App ID that matches a single app
Specified devices associated with the team
Specified development certificates associated with the team
Distribution provisioning profile
A distribution provisioning profile is a provisioning profile that authorizes your app to use certain app services and ensures that you are a known developer distributing or uploading your app. A distribution provisioning profile contains a single App ID that matches one or more of your apps and a distribution certificate. You configure the App ID indirectly through Xcode to use certain app services. Xcode enables and configures app services by setting entitlements and performing other configuration steps. Some entitlements are enabled for an App ID (stored in your developer account) and others are set in the Xcode project. When you export or upload your app, Xcode signs the app bundle with the distribution certificate referenced in the distribution provisioning profile.
Code signing is a macOS security technology that you use to certify that an app was created by you. Once an app is signed, the system can detect any change to the app—whether the change is introduced accidentally or by malicious code.
You participate in code signing as a developer when you obtain a signing identity and apply your signature to apps that you ship. A certificate authority (often Apple) vouches for your signing identity.
Note: In most cases, you can rely on Xcode’s automatic code signing, which requires only that you specify a code signing identity in the build settings for your project. This document is for readers who must go beyond automatic code signing—perhaps to troubleshoot an unusual problem, or to incorporate the codesign(1) tool into a build system.
Benefits of Code Signing
After installing a new version of a code-signed app, a user is not bothered with alerts asking again for permission to access the keychain or similar resources. As long as the new version uses the same digital signature, macOS can treat the new app exactly as it treated the previous one.
Other macOS security features, such as App Sandbox and parental controls, also depend on code signing. Specifically, code signing allows the operating system to:
Ensure that a piece of code has not been altered since it was signed. The system can detect even the smallest change, whether it was intentional (by a malicious attacker, for example) or accidental (as when a file gets corrupted). When a code signature is intact, the system can be sure the code is as the signer intended.
Identify code as coming from a specific source (a developer or signer). The code signature includes cryptographic information that unambiguously points to a particular author.
Determine whether code is trustworthy for a specific purpose. Among other things, a developer can use a code signature to state that an updated version of an app should be considered by the system to be the same app as the previous version.
Limitations of Code Signing
Code signing is one component of a complete security solution, working in concert with other technologies and techniques. It does not address every possible security issue. For example, code signing does not:
Guarantee that a piece of code is free of security vulnerabilities.
Guarantee that an app will not load unsafe or altered code—such as untrusted plug-ins—during execution.
Provide digital rights management (DRM) or copy protection technology. Code signing does not in any way hide or obscure the content of the signed code.
See Also
Read Security Overview to understand the place of code signing in the macOS security picture.
For descriptions of the command-line tools for performing code signing, see the codesign and csreq man pages.
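As a quick sketch of the command-line side mentioned above, the codesign and spctl tools can inspect and verify a signed app (the application path here is a placeholder):

```shell
# Show the signing identity and signature details of an app bundle
codesign -dv --verbose=4 /Applications/Example.app

# Verify that the signature is intact and covers all nested code
codesign --verify --deep --strict --verbose=2 /Applications/Example.app

# Ask Gatekeeper whether the app would be allowed to run
spctl --assess --verbose /Applications/Example.app
```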
App Transport Security
On Apple platforms, a networking security feature called App Transport Security (ATS) improves privacy and data integrity for all apps and app extensions. It does this by requiring that network connections made by your app are secured by the Transport Layer Security (TLS) protocol using reliable certificates and ciphers. ATS blocks connections that don’t meet minimum security requirements.
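By default, ATS requires HTTPS with modern TLS. If you must talk to a legacy endpoint, you can declare a narrow exception in Info.plist; the domain below is a placeholder, and any exception should be as limited as possible since it weakens the protection ATS provides:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>legacy.example.com</key>
        <dict>
            <!-- Allow plain HTTP for this single domain only -->
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```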
Use Apple CryptoKit to perform common cryptographic operations:
Compute and compare cryptographically secure digests.
Use public-key cryptography to create and evaluate digital signatures, and to perform key exchange. In addition to working with keys stored in memory, you can also use private keys stored in and managed by the Secure Enclave.
Generate symmetric keys, and use them in operations like message authentication and encryption.
Prefer CryptoKit over lower-level interfaces. CryptoKit frees your app from managing raw pointers, and automatically handles tasks that make your app more secure, like overwriting sensitive data during memory deallocation.
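A minimal sketch of the three operations listed above using CryptoKit (the message and key choices are illustrative):

```swift
import CryptoKit
import Foundation

let message = Data("hello".utf8)

// 1. Compute a cryptographically secure digest.
let digest = SHA256.hash(data: message)

// 2. Public-key cryptography: sign with a Curve25519 private key,
//    then verify with the matching public key.
let privateKey = Curve25519.Signing.PrivateKey()
let signature = try privateKey.signature(for: message)
let isValid = privateKey.publicKey.isValidSignature(signature, for: message)

// 3. Symmetric encryption: seal and open an AES-GCM box.
let symmetricKey = SymmetricKey(size: .bits256)
let sealedBox = try AES.GCM.seal(message, using: symmetricKey)
let decrypted = try AES.GCM.open(sealedBox, using: symmetricKey)
```

The same key types can also be backed by the Secure Enclave (for example, `SecureEnclave.P256.Signing.PrivateKey`) so the private key material never enters app memory.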
In my interview experience with Meta, I learned a lot. HR and the interviewers were friendly, and HR guided me on how to prepare for iOS interviews, which helped me a lot.
I was lucky because they didn't ask LeetCode hard problems. The questions were medium level, and all of them were ones I had solved before, so I solved all the coding problems, though I didn't optimize some of my answers well.
I think my behavioral interview didn’t go well because it ended 10 minutes early.
Interview with Amazon (Senior iOS Engineer)
I applied to the senior iOS position without a referral. After 1.5 months, HR reached out to me.
After the HR conversation, I was invited to a phone screening interview.
Four days after the phone screening interview, I was invited to the final round.
The interview process was:
HR Conversation
Phone Screening Interview
1 session: 60 min, Leadership Principles + Coding
Final Interview (60 min x 5)
1 session: 60 min, Leadership Principles
3 sessions: 60 min each, Leadership Principles + Coding
1 session: 60+ min, Mobile System Design
The interview process is very similar to Meta's. The only difference is that every session includes around 15-25 minutes of Leadership Principles questions.
My experience with Amazon was good. They provided detailed preparation materials for the iOS position.
Five days after the final interview, HR called me to share the interviewers' feedback. (I think if you pass the final round, they might send an email rather than call.)
Interview with Google (iOS Engineer)
Unlike the interviews with Meta and Amazon, to be honest, my interview with Google was a few years ago, not in 2024.
The interview process was:
HR Hangout Chat
Phone Screening: 60 min, 2 coding problems
On Site
In 2020, I failed the phone screening interview.
My interview experience with Google was good. They used a Google Doc for the coding interview.
I remember the difficulty of the coding problems was around LeetCode medium level.
Conclusion
Initially, I was concerned about my English communication skills. However, I successfully passed the HR screening and technical interviews, and even received positive feedback from behavioral interviews. If you’re worried about your English proficiency, I encourage you not to dwell on it too much. Instead, focus on effectively conveying your experiences, clarifying questions, and demonstrating your passion.
The coding rounds were neither excessively difficult nor too easy – they were of medium difficulty. I recommend practicing basic data structures and algorithms such as Heaps, Trees, Linked Lists, LRU caches, and Graphs. Most FAANG companies don’t require writing compilable code; they’re more interested in your communication skills, problem clarification abilities, approach to breaking down problems, and how you optimize time and space complexity.
For iOS-specific questions, unlike other companies, these tech giants are more interested in mobile system design rather than specific knowledge of the UIKit framework.
For the system design round, I strongly advise practicing extensively. Focus on drawing and explaining high-level components, their interactions, and server communication. Emphasize mobile-side system design.
In behavioral interviews, structure your responses using the STAR format, drawing from your experiences. Share your best stories if possible, and avoid repeating anecdotes in the final stages.
Although I didn’t receive an offer, I hope my experience proves helpful to those considering or preparing for similar interviews.
I’ve also created a GitHub repository to share information about companies that are friendly to native mobile development. If you know of such companies, please feel free to contribute to this resource.
Core Animation provides a general purpose system for animating views and other visual elements of your app. Core Animation is not a replacement for your app’s views. Instead, it is a technology that integrates with views to provide better performance and support for animating their content. It achieves this behavior by caching the contents of views into bitmaps that can be manipulated directly by the graphics hardware. In some cases, this caching behavior might require you to rethink how you present and manage your app’s content, but most of the time you use Core Animation without ever knowing it is there. In addition to caching view content, Core Animation also defines a way to specify arbitrary visual content, integrate that content with your views, and animate it along with everything else.
You use Core Animation to animate changes to your app’s views and visual objects. Most changes relate to modifying the properties of your visual objects. For example, you might use Core Animation to animate changes to a view’s position, size, or opacity. When you make such a change, Core Animation animates between the current value of the property and the new value you specify. You would typically not use Core Animation to replace the content of a view 60 times a second, such as in a cartoon. Instead, you use Core Animation to move a view’s content around the screen, fade that content in or out, apply arbitrary graphics transformations to the view, or change the view’s other visual attributes.
Apple Document
Layers Provide the Basis for Drawing and Animations
Layer objects are 2D surfaces organized in a 3D space and are at the heart of everything you do with Core Animation. Like views, layers manage information about the geometry, content, and visual attributes of their surfaces. Unlike views, layers do not define their own appearance. A layer merely manages the state information surrounding a bitmap. The bitmap itself can be the result of a view drawing itself or a fixed image that you specify. For this reason, the main layers you use in your app are considered to be model objects because they primarily manage data. This notion is important to remember because it affects the behavior of animations.
Apple Document
Most layers do not do any actual drawing in your app. Instead, a layer captures the content your app provides and caches it in a bitmap, which is sometimes referred to as the backing store. When you subsequently change a property of the layer, all you are doing is changing the state information associated with the layer object. When a change triggers an animation, Core Animation passes the layer’s bitmap and state information to the graphics hardware, which does the work of rendering the bitmap using the new information, as shown in Figure 1-1. Manipulating the bitmap in hardware yields much faster animations than could be done in software.
Because it manipulates a static bitmap, layer-based drawing differs significantly from more traditional view-based drawing techniques. With view-based drawing, changes to the view itself often result in a call to the view's drawRect: method to redraw content using the new parameters. But drawing in this way is expensive because it is done using the CPU on the main thread. Core Animation avoids this expense whenever possible by manipulating the cached bitmap in hardware to achieve the same or similar effects.
Although Core Animation uses cached content as much as possible, your app must still provide the initial content and update it from time to time. There are several ways for your app to provide a layer object with content, which are described in detail in Providing a Layer’s Contents.
Apple Document
Rendering Process
Layers Can Be Manipulated in Three Dimensions
Every layer has two transform matrices that you can use to manipulate the layer and its contents. The transform property of CALayer specifies the transforms that you want to apply both to the layer and its embedded sublayers. Normally you use this property when you want to modify the layer itself. For example, you might use that property to scale or rotate the layer or change its position temporarily. The sublayerTransform property defines additional transformations that apply only to the sublayers and is used most commonly to add a perspective visual effect to the contents of a scene.
Transforms work by multiplying coordinate values through a matrix of numbers to get new coordinates that represent the transformed versions of the original points. Because Core Animation values can be specified in three dimensions, each coordinate point has four values that must be multiplied through a four-by-four matrix, as shown in Figure 1-7. In Core Animation, the transform in the figure is represented by the CATransform3D type. Fortunately, you do not have to modify the fields of this structure directly to perform standard transformations. Core Animation provides a comprehensive set of functions for creating scale, translation, and rotation matrices and for doing matrix comparisons. In addition to manipulating transforms using functions, Core Animation extends key-value coding support to allow you to modify a transform using key paths. For a list of key paths you can modify, see CATransform3D Key Paths.
Figure 1-8 shows the matrix configurations for some of the more common transformations you can make. Multiplying any coordinate by the identity transform returns the exact same coordinate. For other transformations, how the coordinate is modified depends entirely on which matrix components you change. For example, to translate along the x-axis only, you would supply a nonzero value for the tx component of the translation matrix and leave the ty and tz values to 0. For rotations, you would provide the appropriate sine and cosine values of the target rotation angle.
Apple
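The helper functions and key paths described above look like this in practice (the values are illustrative):

```swift
import QuartzCore

let layer = CALayer()

// Scale to half size in x and y.
layer.transform = CATransform3DMakeScale(0.5, 0.5, 1.0)

// Translate 100 points along the x-axis (tx = 100, ty = tz = 0).
layer.transform = CATransform3DMakeTranslation(100, 0, 0)

// Rotate 45 degrees around the z-axis; internally the matrix is built
// from the sine and cosine of the angle.
layer.transform = CATransform3DMakeRotation(.pi / 4, 0, 0, 1)

// The same rotation via key-value coding on a transform key path.
layer.setValue(CGFloat.pi / 4, forKeyPath: "transform.rotation.z")
```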
Layer Tree
An app using Core Animation has three sets of layer objects. Each set of layer objects has a different role in making the content of your app appear onscreen:
Objects in the model layer tree (or simply “layer tree”) are the ones your app interacts with the most. The objects in this tree are the model objects that store the target values for any animations. Whenever you change the property of a layer, you use one of these objects.
Objects in the presentation tree contain the in-flight values for any running animations. Whereas the layer tree objects contain the target values for an animation, the objects in the presentation tree reflect the current values as they appear onscreen. You should never modify the objects in this tree. Instead, you use these objects to read current animation values, perhaps to create a new animation starting at those values.
Objects in the render tree perform the actual animations and are private to Core Animation.
Each set of layer objects is organized into a hierarchical structure like the views in your app. In fact, for an app that enables layers for all of its views, the initial structure of each tree matches the structure of the view hierarchy exactly. However, an app can add additional layer objects—that is, layers not associated with a view—into the layer hierarchy as needed. You might do this in situations to optimize your app’s performance for content that does not require all the overhead of a view. Figure 1-9 shows the breakdown of layers found in a simple iOS app. The window in the example contains a content view, which itself contains a button view and two standalone layer objects. Each view has a corresponding layer object that forms part of the layer hierarchy.
Apple
For every object in the layer tree, there is a matching object in the presentation and render trees, as shown in Figure 1-10. As was previously mentioned, apps primarily work with objects in the layer tree but may at times access objects in the presentation tree. Specifically, accessing the presentationLayer property of an object in the layer tree returns the corresponding object in the presentation tree. You might want to access that object to read the current value of a property that is in the middle of an animation.
Important: You should access objects in the presentation tree only while an animation is in flight. While an animation is in progress, the presentation tree contains the layer values as they appear onscreen at that instant. This behavior differs from the layer tree, which always reflects the last value set by your code and is equivalent to the final state of the animation.
Apple
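Reading an in-flight value through the presentation tree, as described above, looks roughly like this:

```swift
import QuartzCore

// While an animation is running, presentation() returns a copy of the layer
// holding the values currently on screen; the model layer still holds the
// final (target) values.
func currentOnScreenPosition(of layer: CALayer) -> CGPoint {
    if let presentation = layer.presentation() {
        return presentation.position
    }
    // No animation in flight: the model value is the on-screen value.
    return layer.position
}
```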
The Relationship Between Layers and Views
Layers are not a replacement for your app’s views—that is, you cannot create a visual interface based solely on layer objects. Layers provide infrastructure for your views. Specifically, layers make it easier and more efficient to draw and animate the contents of views and maintain high frame rates while doing so. However, there are many things that layers do not do. Layers do not handle events, draw content, participate in the responder chain, or do many other things. For this reason, every app must still have one or more views to handle those kinds of interactions.
In iOS, every view is backed by a corresponding layer object but in OS X you must decide which views should have layers. In OS X v10.8 and later, it probably makes sense to add layers to all of your views. However, you are not required to do so and can still disable layers in cases where the overhead is unwarranted and unneeded. Layers do increase your app’s memory overhead somewhat but their benefits often outweigh the disadvantage, so it is always best to test the performance of your app before disabling layer support.
When you enable layer support for a view, you create what is referred to as a layer-backed view. In a layer-backed view, the system is responsible for creating the underlying layer object and for keeping that layer in sync with the view. All iOS views are layer-backed and most views in OS X are as well. However, in OS X, you can also create a layer-hosting view, which is a view where you supply the layer object yourself. For a layer-hosting view, AppKit takes a hands off approach with managing the layer and does not modify it in response to view changes.
Note: For layer-backed views, it is recommended that you manipulate the view, rather than its layer, whenever possible. In iOS, views are just a thin wrapper around layer objects, so any manipulations you make to the layer usually work just fine. But there are cases in both iOS and OS X where manipulating the layer instead of the view might not yield the desired results. Wherever possible, this document points out those pitfalls and tries to provide ways to help you work around them.
In addition to the layers associated with your views, you can also create layer objects that do not have a corresponding view. You can embed these standalone layer objects inside of any other layer object in your app, including those that are associated with a view. You typically use standalone layer objects as part of a specific optimization path. For example, if you wanted to use the same image in multiple places, you could load the image once and associate it with multiple standalone layer objects and add those objects to the layer tree. Each layer then refers to the source image rather than trying to create its own copy of that image in memory.
Some code and contents are sourced from Udemy. This post is my personal notes, where I summarize the original contents to grasp the key concepts. (🎨 Some of the images I drew myself.)
Bit Manipulations
There are 6 types of bit manipulations: OR, AND, XOR, NOT, left shift, and right shift.
OR
1 | 0 -> 1
1 | 1 -> 1
0 | 0 -> 0
And
1 & 0 -> 0
1 & 1 -> 1
0 & 0 -> 0
XOR (the result bit is 1 only when exactly one of the two bits is 1)
1 ^ 0 -> 1
1 ^ 1 -> 0
0 ^ 0 -> 0
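The other three standard operators (NOT `~`, left shift `<<`, right shift `>>`) work like this in Swift; the value is illustrative:

```swift
let x: UInt8 = 0b0000_0101   // 5

// NOT flips every bit within the type's width
let notX = ~x                // 0b1111_1010 = 250

// Left shift multiplies by 2 for each position shifted
let shiftedLeft = x << 1     // 0b0000_1010 = 10

// Right shift divides by 2 for each position shifted (truncating)
let shiftedRight = x >> 1    // 0b0000_0010 = 2
```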
Example 1. Check whether the N-th bit is 1

func is1Bit(number: Int, at: Int) -> Bool {
    var checkBit = 1
    checkBit <<= at
    let result = number & checkBit
    return result == checkBit
}
let number = 789
String(number, radix: 2)
// Bits at index 0, 2, 4, 8, and 9 are 1 (789 = 0b1100010101)
is1Bit(number: number, at: 0)
is1Bit(number: number, at: 2)
is1Bit(number: number, at: 4)
is1Bit(number: number, at: 8)
is1Bit(number: number, at: 9)
// The bit at index 1 is 0
is1Bit(number: number, at: 1)
Example 2. Set N-th bit as 1
var number = 789

func set1Bit(number: inout Int, at: Int) {
    var checkBit = 1
    checkBit <<= at
    number |= checkBit
}
print("Before: \(number)")
set1Bit(number: &number, at: 1)
print("After: \(number)")
Setting the N-th bit to 1 is very easy:
Create a checkBit
Apply the OR operation to the number
The input number is 789.
When you set the bit at index 1, the result increases by 2 (789 becomes 791).
Example 3. Print and count 1’s bits
In Swift, there is a convenient API to get the binary representation of an Int:
String(789, radix: 2)
Alternatively, we can print every bit ourselves by walking a checkBit from the most significant position down to the least significant:
func printBitsAndReturn1sBits(_ number: UInt) -> Int {
    // Use an unsigned Int, because a signed Int uses the leftmost bit for the sign.
    var checkBit: UInt = 1
    // MemoryLayout returns the byte size of UInt, which depends on the architecture:
    // 4 bytes (32 bit) or 8 bytes (64 bit).
    // Multiply by 8 to get bits, and subtract 1 because the bit index starts at 0.
    let bits = MemoryLayout<UInt>.size * 8 - 1
    checkBit <<= bits
    var count = 0
    while checkBit != 0 {
        if number & checkBit == checkBit {
            print("1", terminator: " ")
            count += 1
        } else {
            print("0", terminator: " ")
        }
        // Move the check bit one position to the right
        checkBit >>= 1
    }
    return count
}
printBitsAndReturn1sBits(789)
With the approach above, the time complexity is O(number of bits).
We can optimize it using the n & (n - 1) trick, which clears the lowest set bit on every iteration. The time complexity becomes O(number of 1 bits), assuming we skip printing every bit and only count the 1s.
func get1sBits(_ number: UInt) -> Int {
    var input = number
    var count = 0
    while input != 0 {
        // input & (input - 1) clears the lowest set bit
        input &= (input - 1)
        count += 1
    }
    return count
}
get1sBits(789)
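For completeness, Swift's standard library already exposes this operation on all fixed-width integers as `nonzeroBitCount` (a population count), so in production code you can skip the manual loop:

```swift
let number = 789                // 0b1100010101
print(number.nonzeroBitCount)   // prints 5: bits 0, 2, 4, 8, 9 are set
```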
Example 4. Reverse the bits of an integer
func reversedBit(_ number: UInt) -> UInt {
    var number = number
    print("Input: \(number), Bits: \(String(number, radix: 2))")
    var reversedNumber: UInt = 0
    while number != 0 {
        // Extract the rightmost bit of the remaining input
        let rightmostBit = number & 1
        // Make room on the right, then append the extracted bit
        reversedNumber = (reversedNumber << 1) | rightmostBit
        number >>= 1
    }
    return reversedNumber
}
let result = reversedBit(789)
print("Output: \(result), Bits: \(String(result, radix: 2))")
This sample project demonstrates how to preserve your appʼs state information and restore the app to that previous state on subsequent launches. During a subsequent launch, restoring your interface to the previous interaction point provides continuity for the user, and lets them finish active tasks quickly.
When using your app, the user performs actions that affect the user interface. For example, the user might view a specific page of information, and after the user leaves the app, the operating system might terminate it to free up the resources it holds. The user should be able to return to where they left off — and UI state restoration is a core part of making that experience seamless.
This sample app demonstrates the use of state preservation and restoration for scenarios where the system interrupts the app. The sample project manages a set of products. Each product has a title, an image, and other metadata you can view and edit. The project shows how to preserve and restore a product in its DetailParentViewController.
The sample supports two state preservation approaches. In iOS 13 and later, apps save the state for each window scene using NSUserActivity objects. In iOS 12 and earlier, apps preserve the state of their user interfaces by saving and restoring the configuration of view controllers.
For scene-based apps, UIKit asks each scene to save its state information using an NSUserActivity object. NSUserActivity is a core part of modern state restoration with UIScene and UISceneDelegate. In your own apps, you use the activity object to store information needed to recreate your scene’s interface and restore the content of that interface. If your app doesn’t support scenes, use the view-controller-based state restoration process to preserve the state of your interface instead.
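A minimal sketch of that scene-based flow (the activity type, the userInfo key, and showProduct(id:) are placeholders of mine, not code from Apple's sample): UIKit calls stateRestorationActivity(for:) when it is about to release the scene, and hands the saved activity back on the scene session at the next launch.

```swift
import UIKit

// Sketch of NSUserActivity-based state restoration for one window scene.
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?
    var currentProductID = "product-42"   // hypothetical app state

    // UIKit asks for an activity describing what the user was doing;
    // store whatever is needed to recreate the scene's interface.
    func stateRestorationActivity(for scene: UIScene) -> NSUserActivity? {
        let activity = NSUserActivity(activityType: "com.example.productDetail")
        activity.userInfo = ["productID": currentProductID]
        return activity
    }

    // On a later launch, the previously saved activity comes back on the session.
    func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        if let activity = session.stateRestorationActivity,
           let productID = activity.userInfo?["productID"] as? String {
            showProduct(id: productID)   // hypothetical: navigate back to that product
        }
    }

    func showProduct(id: String) { /* push the detail view controller for `id` */ }
}
```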
By Apple
High Level System Design
It uses a singleton DataModelManager.
All the view controllers access the singleton DataModelManager directly to get data.
The logic focuses on how to save and restore state using UIStateRestoring.
🌟 Interesting points
It has two scenes.
Main Scene
Image Scene
AppDelegate decides which scene the user navigates to by inspecting the incoming NSUserActivity
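A sketch of how that decision might look (the configuration names and activity type are assumptions of mine and must match Info.plist entries; this is not Apple's sample code verbatim):

```swift
import UIKit

// Choose between the two scenes based on the activity the session should handle.
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     configurationForConnecting connectingSceneSession: UISceneSession,
                     options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        // If the incoming activity asks for the image viewer, connect the Image Scene.
        if options.userActivities.first?.activityType == "com.example.openImage" {
            return UISceneConfiguration(name: "Image Scene",
                                        sessionRole: connectingSceneSession.role)
        }
        return UISceneConfiguration(name: "Main Scene",
                                    sessionRole: connectingSceneSession.role)
    }
}
```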
Some code is sourced from Loony Corn's Udemy course (https://www.udemy.com/user/janani-ravi-2/). This post is a personal note summarizing the original contents to grasp the key concepts.
Graph – Shortest Path Algorithms
Given a graph G with vertices V and edges E
Choose any vertex S – the source
What is the shortest path from S to a specific destination vertex D?
In an unweighted graph, it is the path with the fewest hops to get from S to D
In a weighted graph – visit the neighbour which is connected by an edge with the Lowest Weight
Use a priority queue to implement this
To get the next vertex in the path, pop the element with the lowest weight. This choice is what makes it a greedy algorithm
What is a Greedy Algorithm?
Greedy algorithms often fail to find the best solution
A greedy algorithm builds up a solution step by step
At every step it only optimizes for that particular step; it does not look at the overall problem
They do not operate on all the data so they may not see the Big Picture
Greedy algorithms are used for optimization problems
Greedy solutions are especially useful for finding approximate answers to very hard problems that are close to impossible to solve exactly (technically, NP-hard), e.g. the Traveling Salesman Problem
It’s possible to visit a vertex more than once
We check whether the new distance (via the alternative route) is smaller than the old distance
new distance = distance[vertex] + weight of edge[vertex, neighbour]
If new distance < distance[neighbour], update the distance table and put the neighbour in the queue (once again)
This update step is called Relaxation
The standard algorithm for finding the shortest path in a weighted graph (with non-negative edge weights) is Dijkstra’s Algorithm
Shortest Path in Weighted Graph
The algorithm’s efficiency depends on how the priority queue is implemented. Its two operations, updating the queue and popping from the queue, determine the running time
Running time is O(E log V) if a binary heap is used for the priority queue
Running time is O(E + V²) if an array is used for the priority queue
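The relaxation loop above can be sketched in Swift. This version scans an array instead of using a real priority queue (so it runs in O(E + V²)); the adjacency-list representation is my own choice, not from the course:

```swift
// Dijkstra's algorithm over an adjacency list.
// graph[v] is a list of (neighbour, weight) pairs; weights must be non-negative.
// Returns the shortest distance from `source` to every vertex (Int.max = unreachable).
func dijkstra(graph: [[(to: Int, weight: Int)]], source: Int) -> [Int] {
    var distance = [Int](repeating: Int.max, count: graph.count)
    var visited = [Bool](repeating: false, count: graph.count)
    distance[source] = 0

    for _ in 0..<graph.count {
        // Greedy step: pick the unvisited vertex with the lowest tentative distance
        var vertex = -1
        for v in 0..<graph.count where !visited[v] && distance[v] != Int.max {
            if vertex == -1 || distance[v] < distance[vertex] { vertex = v }
        }
        if vertex == -1 { break } // remaining vertices are unreachable
        visited[vertex] = true

        // Relaxation: new distance = distance[vertex] + weight of edge (vertex, neighbour)
        for edge in graph[vertex] {
            let newDistance = distance[vertex] + edge.weight
            if newDistance < distance[edge.to] {
                distance[edge.to] = newDistance // update the distance table
            }
        }
    }
    return distance
}

// 0 → 1 (4), 0 → 2 (1), 2 → 1 (2), 1 → 3 (1)
let graph: [[(to: Int, weight: Int)]] = [
    [(1, 4), (2, 1)],
    [(3, 1)],
    [(1, 2)],
    []
]
print(dijkstra(graph: graph, source: 0)) // [0, 3, 1, 4]
```

Note how relaxation finds the cheaper route 0 → 2 → 1 (cost 3) even though the direct edge 0 → 1 (cost 4) is discovered first.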
Sorts the elements of this buffer according to areInIncreasingOrder, using a stable, adaptive merge sort. The adaptive algorithm used is Timsort, modified to perform a straight merge of the elements using a temporary buffer.
Timsort is a hybrid, stable sorting algorithm, derived from merge sort and insertion sort, designed to perform well on many kinds of real-world data. It was implemented by Tim Peters in 2002 for use in the Python programming language. The algorithm finds subsequences of the data that are already ordered (runs) and uses them to sort the remainder more efficiently. This is done by merging runs until certain criteria are fulfilled. Timsort was Python’s standard sorting algorithm from version 2.3 to version 3.10,[5] and is used to sort arrays of non-primitive type in Java SE 7,[6] on the Android platform,[7] in GNU Octave,[8] in V8,[9] in Swift,[10] and inspired the sorting algorithm used in Rust.[11]
It uses techniques from Peter McIlroy’s 1993 paper “Optimistic Sorting and Information Theoretic Complexity”.
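Because Swift's sort is a stable merge sort, as the documentation above states, elements that compare equal keep their original relative order; a quick check:

```swift
// Stability: records with equal keys stay in their original relative order.
let records = [("b", 1), ("a", 2), ("b", 0), ("a", 1)]
let byKey = records.sorted { $0.0 < $1.0 }
print(byKey) // [("a", 2), ("a", 1), ("b", 1), ("b", 0)]
```

Stability matters when chaining sorts: sorting by a secondary key first and a primary key second yields a correct multi-key ordering only with a stable sort.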