
6. COMPUTATIONAL COUTURE

Computational Couture

Computational Couture describes the fusion of fashion and digital technology – an interdisciplinary field in which programming, parametric design, and digital fabrication enable new forms of creativity. Garments, accessories, and materials are designed through computational methods, using data, algorithms, and 3D modeling as creative tools. In this way, the design process merges with digital workflows, giving rise to a new aesthetic.

ASSIGNMENTS

As part of this assignment, we learn to apply parametric design and digital fabrication to fashion. We develop conceptual designs, create parametric models using tools such as Grasshopper3D or Blender, and carry out a complete production workflow — from 3D modeling to 3D printing. The goal is to document the design process in a clear and traceable way and to create an independent, original outcome that makes the connection between technology and fashion visible.

REFERENCES & INSPIRATION

JULIA KOERNER

This topic was introduced by JULIA KOERNER, an award-winning architect and designer known for her pioneering work in computational design and 3D-printed fashion. Her projects bridge the worlds of architecture, fashion, and digital fabrication, inspiring us to rethink how technology can shape aesthetics and materiality.

Instagram: Julia Koerner · Instagram: jk3d

SANTOS3D

Stephanie Santos is a multidisciplinary designer from Belgium, currently based in Lisbon, Portugal. Her work explores the intersection of fashion, technology, & biology, merging digital fabrication, parametric design, & bio-based materials to redefine how garments and materials are conceived.

Through her studio SANTOS3D, she investigates the relationship between the human body, digital processes, and living systems — as reflected in her collaborative project Fungal Fashion, which combines mycelium-based materials with computational design methods to create new, sustainable aesthetics.

Stephanie & I met during our internship @ IRIS VAN HERPEN. Thanks to her, I was able to connect with the TextileLab Amsterdam & later the Fabricademy.

Stephanie Santos Textile-Academy

In cooperation with Irene Caretti – Textile-Academy · FungalFashion (web)

IRIS VAN HERPEN

Of course, IRIS VAN HERPEN must be mentioned among the most inspiring Computational Couture designers. She was one of the first — if not the first — to create 3D-printed garments, pioneering a new aesthetic that merges technology, craftsmanship, & haute couture.

Her work has fundamentally shaped the field, turning digital fabrication into a form of artistic expression & redefining what fashion can be.

BUUUUUUT!!!!! a big big BUT!

Even though my personal view of her has evolved since my internship with her atelier in 2018, I still recognize her as one of the key figures who opened the path for designers exploring the intersection of fashion & computation today.

PREVIOUS PROJECTS

RE-LINE - IN COOPERATION WITH DINESH KUMAR

Instagram: Dinesh Kumar

Instagram: House of Wearable Art

Web: House of Wearable Art

In our collection, we brought together our fields of expertise - unifying interdisciplinary product & color design in haute couture. Two contrasting textiles are merged through 4D printing: parametric lines twist fluidly, forming an organic yet architectural structure, a play of motion & transformation.

RELINE CLIP

ReLine

nude4p - beige/apricot bra & smoke11p - blue dress

For more 3D Printing on Textile: ASSIGNMENTS - WEEK 7 - Computational Couture

ASSIGNMENTS

Project Structure

For this assignment, I divided my work into three main parts:

  1. Preparing & executing existing 3D models for 3D printing – adapting previously created models, adjusting scale & geometry, & setting them up for physical fabrication.

  2. Testing various (online) tools for generating 3D files – exploring different platforms & workflows to understand how AI & digital tools can support the design process.

  3. Applying parametric design tools such as Blender & Grasshopper3D to experiment with geometry, structure, & material behavior, translating digital patterns into printable designs.

1. 3D MODELS

From Digital Bodies to Computational Couture

Building on my previous Digital Bodies assignment, I developed a series of eyewear designs derived from various 3D model variations of the ant’s body parts.

Thingiverse: The Red Bull Ant by Kintall_John Gosper

The main challenge in this process was to connect the individual components seamlessly & adapt them to the facial structure while maintaining aesthetic balance & functionality.

To achieve this, I worked between the CAD software Rhino & Blender, switching back & forth to refine geometry & adjust proportions. I converted mesh models into NURBS surfaces to allow for more precise editing & smoother modeling control.

2. 3D GENERATOR

In the second part of the assignment, I focused on testing various online tools & workflows for the generation of 3D files. The goal was to explore how AI-based image generation & digital design platforms can contribute to & expand the creative process in computational design.

  • ChatGPT: generating images through text-to-image prompts.
  • Photoshop: editing and refining the generated visuals.
  • 3D Generators / STUDIO TRIPO 3D: converting 2D images into 3D models for further development.

2.1 PICTURE GENERATOR

Exploring AI & Online Tools for 3D Generation

Design Development through AI: From Glass Forms to 3D Eyewear Concepts

In this phase, I combined ChatGPT’s text-to-image & image-to-image generation tools to develop a new series of eyewear designs. I decided to create additional glasses inspired by a glass collection from Veronika Beckh (Instagram: VERONIKA BECKH) that served as my main aesthetic & structural reference.

At first, the generated glasses remained quite close to conventional eyewear shapes — clean frames with limited dimensionality. To push the results further & approach the visual language I envisioned — a highly 3-dimensional form composed of overlapping, bubble-like structures — I began an iterative image generation process.

In each step, I reinserted the previously generated image back into the model, using the same set of prompts & the original reference object (the glass pieces). This repetition gradually abstracted the shapes, layering organic geometries & enhancing the sense of depth & volume.

Through this recursive workflow, the designs evolved from familiar eyewear silhouettes into speculative, sculptural objects that blur the boundaries between fashion accessory & computational artifact.
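This recursive workflow can be sketched as a simple feedback loop. Note that `generate_image` below is a hypothetical placeholder for whatever image-to-image tool is used (ChatGPT in my case), not a real API call; the point is only that each round feeds the previous output back in as the new reference:

```python
# Sketch of the iterative image-generation loop described above.
# generate_image() is a hypothetical stand-in for a real
# text+image-to-image tool; here it just tags its input so the
# feedback structure is visible.

def generate_image(prompt, reference_image):
    """Placeholder for a real text+image-to-image call."""
    return f"{reference_image}+gen({prompt})"

def recursive_generation(prompt, start_image, iterations=4):
    """Feed each generated image back in as the next reference."""
    image = start_image
    history = [image]
    for _ in range(iterations):
        # Same prompt and original reference idea every round;
        # only the input image changes.
        image = generate_image(prompt, image)
        history.append(image)
    return history

steps = recursive_generation("bubble-like glass eyewear", "glass_reference")
```

A fixed iteration count keeps the abstraction controllable; in practice I judged each round visually rather than counting.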

ChatGPT

After generating the initial images, I refined & adjusted them in Photoshop, improving composition, contrast, & detail to make them suitable as source material for 3D conversion.

2.2 3D GENERATOR

3D Generation & Reflection on AI-based Design Tools

Subsequently, I tested the 3D generator platform STUDIO TRIPO 3D, which allowed me to convert the edited 2D images into 3D meshes.

The results were surprisingly accurate & detailed — as a trained product designer, I found it both fascinating & unsettling to witness how tasks that once required hours of manual modeling can now be achieved within minutes or even seconds through AI-driven tools.

Since the first generated models appeared rather flat & lacked depth, I began to combine both approaches — using image-to-3D generation together with text-to-3D prompts. This hybrid workflow produced more complex & volumetric outcomes.

To further improve the results, I also created new, more 3-dimensional images in ChatGPT, which served as enhanced inputs for the next round of 3D generation. Through this iterative process, the designs gradually became more sculptural, capturing the organic, bubble-like aesthetic I aimed for.

Adding Structure & Material Layers

Next, I developed new models with added structure. Using ChatGPT, I generated further images by combining the previous glass eyewear design with textile textures by KRISTY KUN (Instagram: kristy_kun).

These hybrid visuals were then processed again in Tripo 3D, resulting in more textured & dimensional models that merged material expression with digital form generation.

Next, I refined the geometry & proportions of the eyewear model in Rhino, adjusting the size & position of the temples & scaling the entire model to its correct dimensions.

The finalized 3D file was then imported into the slicer software Ultimaker Cura, where I generated the corresponding G-code for fabrication.

During this stage, I carefully reviewed & adjusted the print parameters—including layer height, infill density, & bed & nozzle temperature—to ensure compatibility with the selected filament material.
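For orientation, the parameters I reviewed map onto Cura profile keys roughly like this (example values for a PLA-like filament, not my exact profile — always match them to the filament manufacturer's datasheet):

```ini
[values]
layer_height = 0.2                 ; mm, vertical resolution
infill_sparse_density = 15         ; %, interior fill
material_print_temperature = 205   ; °C, nozzle temperature
material_bed_temperature = 60      ; °C, build plate temperature
```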

The print showed minor imperfections, but these were quickly identified & corrected, leading to a clean final result.

model: Alberto Blanco

3. PARAMETRIC DESIGN

To approach the parametric design stage, I first conducted a series of short studies & watched multiple tutorials on YouTube to gain a deeper understanding of the underlying principles & workflows of parametric modeling in Blender.

Food4Rhino

1. BLENDER

Following this initial research, I began experimenting with several fundamental modifiers to explore how different parameters could influence the structure & materiality of the model.

A particularly useful reference was the tutorial:

Parametric Form Vase Modelling In Blender Tutorial - design by nadeem

Using the Decimate Modifier, I reduced the geometric complexity of the object in order to simplify the mesh & facilitate subsequent manipulations.

By applying the Wireframe & Subdivision Surface modifiers, I was able to generate & compare a variety of parametric surface structures — ranging from open, lattice-like frameworks to more continuous & fluid geometries.
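The effect of these modifiers on point density can be sketched in plain Python — a stand-alone illustration of the principle, not Blender code (inside Blender the equivalent would be `obj.modifiers.new(..., type='DECIMATE')`, `'WIREFRAME'`, `'SUBSURF'`):

```python
# Toy model of Decimate and Subdivision Surface, applied to a closed
# 2D profile instead of a Blender mesh, to show how each parameter
# changes point density.

def decimate(points, ratio):
    """Keep roughly `ratio` of the points (like Decimate's 'ratio')."""
    step = max(1, round(1 / ratio))
    return points[::step]

def subdivide(points, levels):
    """Insert midpoints `levels` times (like Subdivision's 'levels')."""
    for _ in range(levels):
        out = []
        for a, b in zip(points, points[1:] + points[:1]):  # closed loop
            out.append(a)
            out.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
        points = out
    return points

profile = [(i, i * i % 7) for i in range(16)]  # arbitrary closed profile
coarse = decimate(profile, ratio=0.25)   # 16 points -> 4 points
smooth = subdivide(coarse, levels=2)     # 4 points -> 16 points
```

Reducing first and subdividing afterwards is what produces the continuous, fluid geometries; skipping the decimation step keeps the open, lattice-like character.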

2. BLENDER

Since the initial eyewear design was already highly detailed, adding further layers of geometry resulted in a visually overloaded form. Consequently, I chose to continue the parametric explorations with a more minimal eyewear model, which I had also created using Tripo.studio, drawing inspiration from the designs of DEMOBAZA.com

On this simplified model, I developed several variations with differing degrees of complexity & mesh density.

Through systematic adjustments of the Wireframe & Subdivision Surface parameters, I was able to observe how structural reduction & subdivision influence the overall aesthetic & spatial perception of the object.

3. BLENDER

As an additional exercise, I explored a more direct & intuitive approach to creating surface structures.

First, I visualized a distribution of vertices across the surface of my model to better understand its geometric topology.

In Edit Mode, I then used the selection tools Select → Random & Select → Checker Deselect to isolate a number of vertices at irregular intervals.

By manually moving these selected points along their normals, I was able to generate a localized surface deformation, producing a subtle yet visually dynamic texture.
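The same select-and-push-along-normals idea can be demonstrated stand-alone. The sketch below uses points on a unit sphere, where each point's normal is simply the point itself (an assumption that only holds for a sphere centered at the origin):

```python
import math
import random

def sphere_points(n):
    """Roughly even points on a unit sphere (golden-angle spiral)."""
    pts = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n
        r = math.sqrt(1 - z * z)
        a = math.pi * (1 + 5 ** 0.5) * i
        pts.append((r * math.cos(a), r * math.sin(a), z))
    return pts

def displace_random(points, fraction=0.3, amount=0.1, seed=42):
    """Push a random subset of points outward along their normals."""
    rng = random.Random(seed)
    out = []
    for p in points:
        if rng.random() < fraction:
            # On a unit sphere the normal is the point itself,
            # so moving along it is a simple scale.
            out.append(tuple(c * (1 + amount) for c in p))
        else:
            out.append(p)
    return out

bumpy = displace_random(sphere_points(100))
```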

RHINO GRASSHOPPER

Next, I began experimenting with Grasshopper for Rhino.

After watching several tutorials, I ambitiously attempted to apply a parametric surface structure directly onto my existing eyewear model.

However, it quickly became apparent that there is a reason why most tutorials & introductory examples focus on simple geometries or base surfaces rather than complex, pre-existing models.

Working with highly detailed meshes introduced additional challenges in maintaining clean topology & controllable parametric behavior, revealing the limitations of my initial approach.

To determine which geometric structures were most suitable for parametric processing in Grasshopper, I conducted a series of controlled tests using different model typologies — including open & closed polygon meshes, triangular & quadrilateral face structures, as well as NURBS & SubD geometries.

This comparative exploration allowed me to evaluate how each topology type affects data structure, surface continuity, & responsiveness to parametric transformations.

After discussing my technical challenges with Finn Tacke, a former classmate & great designer based in Germany, he developed a custom Grasshopper definition that aimed to convert my model into a Loft surface.

On this surface, a point grid was to be generated through surface parameterization, providing a base structure for curve interpolation & the subsequent creation of pipe geometries.

Unfortunately, the node network did not perform well on my model: the Loft operation simplified the form excessively, & the generated pipe structures appeared only partially or inconsistently across the surface.

Under the guidance of Miguel Castiano Fernandez & our Lab Instructors at the academy, Mar & Raúl, we further analyzed the definition & adjusted the parameters to better adapt the node logic to the specific topology of my model.

We reconstructed the node network, simplifying the input geometry & reorganizing the data trees to ensure a cleaner & more interpretable parameter flow.

Finally, the new definition began to generate visible results — not exactly as initially intended, but sufficient to demonstrate how parametric logics & data hierarchy influence the translation of complex 3D geometries into computational design workflows.

Grasshopper Definition — Step-by-Step Explanation

(read from left to right)

  1. Input & Contours

Geometry: serves as the base object (Brep or Mesh). Unit Y + Number Slider: define a direction vector along the Y-axis with adjustable magnitude. Contour: generates a series of contour curves through the geometry, oriented in the specified direction and spaced according to the defined interval distance.

→ Result: a list of parallel contour curves distributed across the object.

  2. Dividing the Contours

Divide Curve: subdivides each contour curve into a number of equidistant points (controlled by the “Count” slider).

→ Result: a point grid along the contour curves, organized as a data tree (each contour representing one branch).

  3. Attractor Logic (Distance → Wave Modulation)

Point: defines the attractor point manually placed in the scene. Distance: measures the distance from each divided point to the attractor. Multiplication: scales these distance values to define wave frequency or intensity. Sine: applies a sinusoidal function to the scaled distances, generating periodic variation.

→ Result: each point receives a modulation value (ranging from –1 to +1) that will serve as an amplitude factor for the upcoming deformation.

  4. Constructing the Displacement Vector

Unit X + Number Slider: define a base vector in the X direction. Amplitude: combines this vector with the sinusoidal values, producing point-specific displacement vectors.

→ Result: each point obtains an individual displacement vector, scaled according to its distance from the attractor point.

  5. Moving Points & Reconstructing Curves

Move: shifts the contour points by their respective displacement vectors, creating wave-like point sequences. Interpolate: reconstructs smooth curves through the displaced points. Degree Slider: controls the curve’s smoothness. Periodic Boolean Toggle: closes the curve loop if activated.

→ Result: a set of modulated, flowing curves that respond to the attractor influence.

  6. Generating the Tubular Structure

Pipe: wraps a cylindrical geometry around the interpolated curves. Radius Slider: controls the pipe’s thickness. Caps: defines the pipe’s end type (flat, round, or closed).

→ Result: a three-dimensional tubular network following the modulated contour geometry.

Design Outcome & Geometric Effect

The setup slices the geometry into parallel contour curves. Along each contour, points are generated & their position is modulated by an attractor-based sine function. The displaced points are interpolated into smooth curves, which are then transformed into pipes.

→ The outcome is a periodic, wave-like relief structure — a combination of geometric logic & visual rhythm that illustrates the spatial effects of parametric modulation.
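To make the data flow concrete, here is a minimal Python re-creation of the definition's core logic. The slider names in the comments refer to the Grasshopper components described above; the Interpolate and Pipe stages are omitted because they only give the displaced points geometric thickness:

```python
import math

# Minimal re-creation of the Grasshopper logic: contour rows of
# points -> distance to attractor -> sine modulation -> displacement
# along the unit-X vector.

ATTRACTOR = (5.0, 5.0)   # the manually placed attractor Point
FREQUENCY = 2.0          # the "Multiplication" slider
AMPLITUDE = 0.5          # scale of the unit-X displacement vector

def contour_points(rows=10, count=10):
    """Point grid standing in for Contour + Divide Curve."""
    return [[(float(x), float(y)) for x in range(count)] for y in range(rows)]

def modulate(point):
    """Distance -> Multiplication -> Sine; returns a value in [-1, 1]."""
    d = math.dist(point, ATTRACTOR)
    return math.sin(d * FREQUENCY)

def displace(point):
    """Move the point along unit-X, scaled by its sine value."""
    s = modulate(point)
    return (point[0] + AMPLITUDE * s, point[1])

waved = [[displace(p) for p in row] for row in contour_points()]
```

Because the sine value is bounded, no point can drift further than AMPLITUDE from its contour, which is what keeps the relief wave-like rather than chaotic.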

Raúl further explored the complexity of Grasshopper & developed a more detailed node setup. This allowed me to gradually understand the specific function of each component more clearly, enabling me to adapt & simplify the definition independently.

Simplified Grasshopper Definition – Analysis & Intent

In this refined node setup, several components from the earlier version were intentionally removed — including the Attractor logic (Distance, Multiplication, & Sine), the Amplitude & Move operations, as well as the secondary Unit X vector used for displacement.

By stripping the definition down to its essential structure — Contours → Divide Curve → Interpolate → Pipe — the setup focuses on the core parametric relationship between the object’s geometry & its generative curves.

This simplification eliminates the sinusoidal deformation, resulting in evenly distributed, smooth contour lines that are subsequently converted into pipes.

→ Outcome: The definition becomes easier to read, modify, and analyze, providing a clear foundation for understanding data flow, curve generation, and structural hierarchy in Grasshopper before reintroducing more complex modulation effects.

After experimenting with the parameter sliders & refining the definition, I arrived at my final result.

SLICER

Preparing the Model for 3D Printing

The next step consisted of preparing the model for 3D printing. I decided to use a resin-based printer, which offers high precision & smooth surface quality — ideal for capturing the detailed geometry of the design.

To fit the model onto the printer’s build plate, I needed to divide the glasses into three separate parts.

( What seemed like a simple task at first turned into a new technical challenge: the geometry required careful adjustments to maintain alignment and watertight intersections.

Throughout this stage, I switched repeatedly between Rhino & Blender, testing different slicing & repair methods, until I finally managed to successfully segment the model in Blender for printing. )

For the final slicing process, I used the Anycubic Photon Workshop slicer.

Within the software, I was able to orient the individual parts of the glasses on the build platform & adjust the printing parameters according to the material & desired precision.

I used a white ABS-like resin from Elegoo, chosen for its durability, smooth surface quality, & fine detail resolution, making it well-suited for small & precise components such as eyewear frames.

In addition, I configured the support structures, selecting both the type & density to ensure optimal adhesion & stability during the printing process.

RESIN PRINT

First Print Attempt

For the first print, I selected the “light” support setting, consisting of many thin branches designed to hold the model in place while leaving as few surface marks as possible.

However, it soon became apparent that the structure was too fragile to support the weight of the model. During the printing process, the part detached from the supports, & the print had to be aborted & restarted.

Unfortunately, the second attempt also showed printing irregularities, possibly caused by impurities in the resin.

To address this issue, I plan to clean the printer thoroughly and filter the resin before initiating the next print.

Wish me luck!!

...

( Of course, I also applied the Grasshopper node setup to my second eyewear model... )