The Future of Identities
Dillon Chi, Elizabeth Costa, Charlene Joy Dela Cruz, Xun Liu, Ziyi Zhou
Our team explored how our social interactions with one another will evolve by the year 2040. We expect technological innovations to drive increasing global interconnectivity and multicultural blending. We questioned how emerging technologies will change the things we wear, the food we eat, and the way we control perceptions of self. With that in mind, we used different methodologies to explore the following domains of focus: nutrition, digital fashion, and virtual identities.
Possible Futures in 2040
The COVID-19 recession forced people across the globe to make an abrupt shift to working from home, further blending online and offline presence. With the rise of VR headsets, AR smart glasses, and IoT devices, our team wanted to explore how physical and digital realities will blend in the year 2040. In consideration of future pandemic lockdowns and climate change scenarios, we imagined possible futures of remote social interaction. We looked at a world where technology gives individuals more agency over how they are portrayed and the reality they want to experience. Mixed reality paves the way for creating multiple identities and gives individuals the ability to simulate different social contexts and environments.
We looked into many different areas of interest during our first round of research. Diving deeper into what encompasses an identity and all the different ways to explore it, we kept asking, "How do we see ourselves in digital spaces, and how does that impact us?" From there, we expanded into exploring different realities in an attempt to truly understand our inner selves. We also explored the relationship between digital and physical identities: how will the combinations of human-and-machine and human-and-body be enhanced?
Parents will keep children inside for environmental reasons, and the pandemic has produced a generation accustomed to remote work and education.
Sea levels are projected to rise between 5 and 16 inches by the year 2040, caused mainly by melting glaciers and the thermal expansion of ocean water. Scientists are highly confident in a rise of between 1 and 8 feet over the next century.
What is it: Humans will still need to be clothed, but digital collections and garments free from physical and creative restrictions will become part of the fashion landscape. With the rollout of 5G, clothes will function as a new interface, shaping the way we communicate with the connected world and with each other.
Why: The fashion industry is one of the top causes of pollution, driven by a "wear once, take a selfie, chuck it away" culture. An estimated $500 billion is lost every year to clothing under-use and waste, and 87% of all fashion produced ends up in landfill.
When + Where: People will use digital garments in their daily lives and change their appearance on the go. This way they can always have the latest fashion without harming the environment.
How: Augmented-reality glasses overlay digital imagery onto the real world; we will be able to download content to our clothing, viewable through those glasses, and present ourselves differently to everyone around us.
A hairstyle made from water, a dress that alters its shape according to sound: these are all possible.
Chameleon Clothing: With nanotechnology, textiles will be able to react not only to a person's body temperature but also to the amount of light the person is exposed to.
What is it: 3D printed high-tech food.
Why: By 2040, the world's population will grow to a whopping 9 billion, making securing food for everyone a huge challenge.
A 3D printer could drastically shorten the food production chain. Using 3D printing for food production can reduce food miles, agricultural land use, food waste, and labor.
People will again be more interested in what they eat. Also, having your own choice from a huge range of nutritious ingredients allows you to cater to special diets.
When + Where: 3D food printing in hospitals could make the integration of medicine into human nutrition more pleasant: not only adapting food to nutritional needs, but also integrating individual medications into it.
Meals can be perfectly adapted to a patient's nutritional needs and lead to a better recovery.
How: Calorie-tracking devices sync with the 3D printer, transferring their data so the printer can create a customized meal. Lasers cook the food during printing.
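As a rough illustration of that tracker-to-printer data flow, here is a minimal Python sketch. Everything in it (the MealSpec structure, the daily calorie budget, the 30/40/30 macronutrient split) is an invented assumption, not a real device API.

```python
# Hypothetical sketch: turning a day's calorie-tracker data into a
# print specification for a 3D food printer. MealSpec and the
# 30/40/30 protein/carb/fat split are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class MealSpec:
    calories: float   # kcal budget for this meal
    protein_g: float
    carbs_g: float
    fat_g: float

def plan_next_meal(daily_target_kcal: float, consumed_kcal: float,
                   meals_remaining: int) -> MealSpec:
    """Split the remaining calorie budget evenly across the remaining
    meals, then convert energy shares into grams per macronutrient."""
    remaining = max(daily_target_kcal - consumed_kcal, 0.0)
    kcal = remaining / max(meals_remaining, 1)
    return MealSpec(
        calories=kcal,
        protein_g=round(kcal * 0.30 / 4, 1),  # 4 kcal per gram of protein
        carbs_g=round(kcal * 0.40 / 4, 1),    # 4 kcal per gram of carbs
        fat_g=round(kcal * 0.30 / 9, 1),      # 9 kcal per gram of fat
    )

# e.g. 2000 kcal target, 1200 kcal already eaten, two meals left:
meal = plan_next_meal(daily_target_kcal=2000, consumed_kcal=1200,
                      meals_remaining=2)
```

A real printer would of course take ingredient cartridges and recipes into account; the point is only that the tracker's data can be reduced to a per-meal specification the printer consumes.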
What is it: Link the virtual world to reality through the concept of identity.
1) purely virtual, 2) mixed, 3) realistic.
Why: There are things we can do in the virtual world that we can't in the physical world. Users can explore multiple identities in order to find themselves, using virtual identities to communicate, share data, and interact in computer-based (virtual) environments. For those who aren't able to have as much "real-life" freedom to express themselves, virtual/digital environments become safe spaces.
When + Where: Virtual spaces, where identity is plastic and can be played with, provide great opportunities for learning.
Children suffer less inhibition and embarrassment when learning through an avatar, for it is not them who make mistakes, but the avatar.
How: People have an immense amount of flexibility and choice in terms of establishing their virtual existence.
People use parameters to create their virtual avatars, through which to present themselves to others in their communities.
They may choose to create their virtual selves to be as similar to or as different from their real-life selves as they wish, in a variety of physical and other attributes, and thus may experiment with hidden, unexplored, or idealized aspects of themselves.
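The parameter-based avatar creation described above can be sketched as a small data structure. The parameter names, the 1.0 = "scanned self" baseline, and the species field are all invented for illustration.

```python
# Illustrative sketch of parameterized avatar creation; all fields
# and their meanings are assumptions, not a real avatar system.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Avatar:
    eye_scale: float = 1.0    # 1.0 = the user's scanned proportions
    nose_scale: float = 1.0
    mouth_scale: float = 1.0
    face_width: float = 1.0
    species: str = "human"    # e.g. "human", "horse", or a hybrid

# Start from a realistic self-scan, then experiment with an idealized
# or entirely different self by overriding only a few parameters.
real_self = Avatar()
idealized = replace(real_self, eye_scale=1.15, face_width=0.95)
hybrid = replace(real_self, species="half-horse/half-human")
```

Making the avatar immutable and deriving variants with `replace` mirrors the idea that the "real self" baseline stays intact while users experiment with idealized or hybrid versions of it.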
By 2040, the Earth's environment has been further degraded, and people have begun to relocate. 3D-printed food and self-driving vehicles have become common. These changes in the living environment will also change the way people live; their lifestyles will be very different from now. In the future, almost everyone will have their own virtual identity.
Virtual social networking and virtual goods have become mainstream, and people use smart glasses combined with VR/AR technology to view each other's virtual profiles. Based on our investigation of and speculation about future technology and living situations, we created two personas suited to our envisioned 2040 world.
User Persona 01
Our first persona is Veronica, a 22-year-old single woman living on Earth. Due to heavy air pollution, she developed COPD and therefore needs frequent, convenient access to healthcare services. She got her bachelor's in HCI and is now a creative futurist who loves to stay up-to-date on the latest fashion trends.
She usually expresses herself through fashion, no matter the limitation. With the popularity of 3D food-printing technology, she enjoys all her favorite foods without the hassle of a grocery store. One of her struggles now is with the impact of global overpopulation. She would like to have a child soon, but she doesn’t want to contribute to overpopulation.
User Persona 02
Another persona is named XE-3-F2. He is a 52-year-old chemical engineer currently living on Mars. He was among the first groups of people to move to Mars.
He has not seen his family on Earth for three years. He misses them very much, so he spends his free time communicating with them through any possible means. However, it's hard to connect, because a single question and response can take almost an hour to exchange. This makes him feel very distressed.
Veronica and XE-3-F2 met in a virtual online match chat. They had many topics in common and talked for a week. Then they wanted to get to know each other better. Because they do not live in the same place, they cannot meet physically, so they decided to switch their smart glasses to VR mode and agreed to use their virtual identities to meet at a virtual restaurant on the weekend.
On the weekend, before meeting XE-3-F2, Veronica switched her smart glasses to VR mode and began to dress up. She manipulated the virtual wardrobe interface through gestures and put a nice evening dress and a fashionable hat on her virtual avatar. She wanted to make her face look more attractive, so she opened the head adjustment interface and adjusted the size and proportions of her eyes, nose, mouth, and face. After that, she opened the makeup interface and chose a preset makeup she liked. She also used the crossface function, which let her blend her own face with any face she liked. Happy with her look, she was ready to meet him.
She opened the virtual portal and chose to meet in the lobby of the restaurant they had arranged. They saw each other: Veronica appeared as a glamorous beauty, and XE-3-F2 as a handsome young man who looked to be in his 20s (his real age is 52).
They decided to order food first, so they opened the order page with their voices, checked the nutritional content of the meal, and customized its flavor. After ordering, they quickly paid using online payment, and in less than five minutes their food was produced by the 3D food printer.
They found a seat in the dining room, sat down, and started talking. But unlike their previous voice chats, the face-to-face conversation made Veronica feel bored and embarrassed, because it was difficult to find common topics. She started watching TikTok while he kept talking about boring topics.
She started to get annoyed and turned on chameleon mode. Later, his partner called him, and he briefly entered chameleon mode as well. Still annoyed, she tried to block out certain aspects of his personality.
Even after blocking, he was still obnoxious. She didn't want to stay any longer and got up to leave, but he continued to bother her. She turned around and finally decided to block his profile. After blocking him, she felt very relieved and relaxed, and he could no longer interact with her.
These interfaces didn’t change much between iterations; the underlying structure is very similar to what our group decided on after our initial crazy-8s exercise. Users are able to rent or buy digital clothing that is mapped to their bodies in the virtual world. As with the food printer, they can make selections from this interface or copy something from social media. The navigational menu on the left helps them customize other elements of their digital presence.
To allow for freedom of expression in the digital realm, users are able to customize how their head is perceived, whether human, animal, plant, or object. We reasoned that users would not necessarily want to be restricted to a single identity, so we let them choose to be half-horse/half-human or half-horse/half-fish if they wanted to be.
Makeup - Crossface
This interface changed quite a bit from its initial ideation phase. Through development and future-casting, we realized that machine learning and style transfer will be even more developed by 2040. This interface uses those technologies to let users select from a carousel of different looks and apply a chosen look to their identities through style-transfer algorithms.
Makeup - DIY
Users are able to switch between facial feature modalities that allow them to fine tune their look based on presets that they or others created.
Soylent Printer 3000
In the heuristic review of this interface, at first glance the reviewers were under the impression that the Nutrition Cartridge display at the top right was nutritional information for the selected dish, even though no dish was selected in that version.
In the next iteration, we amplified the idea that users are most likely to customize the seasoning or flavor of their dish. Once a dish has been selected, users can adjust its spice, sweetness, savoriness, and saltiness, as well as its protein and carbohydrate content.
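The customization step can be thought of as a set of bounded sliders. The following sketch is only an illustration: the dimension names and the 0–10 slider range are assumptions, not part of the actual interface specification.

```python
# Minimal sketch of dish customization as bounded sliders.
# Dimension names and the 0-10 range are invented for illustration.
DIMENSIONS = ("spice", "sweetness", "savoriness", "saltiness",
              "protein", "carbohydrate")

def customize(dish: dict, **adjustments: float) -> dict:
    """Return a copy of the dish with each requested dimension
    clamped to the slider range [0, 10]; the original is untouched."""
    updated = dict(dish)
    for name, value in adjustments.items():
        if name not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {name}")
        updated[name] = min(max(value, 0.0), 10.0)
    return updated

base = {d: 5.0 for d in DIMENSIONS}          # the dish's default profile
order = customize(base, spice=8.0, sweetness=2.0, protein=12.0)
# protein=12.0 is out of range and clamps to the slider maximum of 10.0
```

Clamping out-of-range requests keeps every order printable, which matters once the sliders also control nutrient content rather than just seasoning.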
In the very first iteration of this interface, personalities were assembled using a puzzle/Lego-type interface. For the final iteration, we posited that long-time users were not likely to recreate personalities every time they logged on and would have recommended or saved layers from before. Here the user selects a single main personality augmentation, which can be taken from people in the public eye; these main personalities can be further augmented through the various characteristics on the right.
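The layered composition described above can be sketched as a simple merge: one main personality plus smaller characteristic layers, with later layers overriding earlier ones. The trait names and the override rule are assumptions made for illustration.

```python
# Hedged sketch of layered personality augmentation. Trait names and
# the "later layers win" blending rule are assumptions.
def compose_personality(main: dict, *layers: dict) -> dict:
    """Merge characteristic layers over a main personality;
    later layers take precedence on conflicting traits."""
    result = dict(main)
    for layer in layers:
        result.update(layer)
    return result

main = {"tone": "charismatic", "humor": "dry"}   # e.g. a public-figure preset
tweaks = {"humor": "playful"}                    # the user's own augmentation
persona = compose_personality(main, tweaks)
```

Keeping the main personality and each augmentation as separate layers is what allows saved or recommended layers from earlier sessions to be reapplied without rebuilding the persona from scratch.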