
Science, Passion, and the Future of Multi-Objective Optimization

I’d like to delve into your personal journey. You had to find a suitable research topic for your PhD in 1996 at Tulane University. Can you briefly tell me the story that led you to work on evolutionary multi-objective optimization?

This is a long story, so I’ll try to be brief. When I got to Tulane for my master’s and then PhD in computer science, I didn’t know what topic I wanted to work on. I knew I didn’t want to do software engineering or databases. At first, I tried programming languages, then robotics. Neither worked out. Then one day, by accident, I read a paper that used genetic algorithms to solve a structural optimization problem. I decided to dedicate a course project to this paper, developed my own genetic algorithm and wrote software to evaluate it. This got me very excited, as I could now see how a genetic algorithm was able to produce good solutions to a complex optimization problem relatively easily. That excitement for evolutionary algorithms has stayed with me my entire life.

However, although two professors at Tulane worked with evolutionary algorithms, I decided to go with a robotics professor. He didn’t know much about evolutionary computing, and neither did I, but we decided we could work together. As such, he couldn’t help me find a suitable topic. Professor Bill Buckles, who worked with evolutionary algorithms, recommended that I work on multi-objective optimization, as not many people had been using evolutionary algorithms in that domain. After looking for related papers, I found my PhD topic. Serendipitously, it all came together without being planned. I believe that many great things come together by serendipity rather than by planning.

Can you elaborate on what sparked your interest in evolutionary computing?

There is a big difference between classical optimization and using evolutionary algorithms. Classical optimization mostly relies on math and calculus, whereas evolutionary algorithms are inspired by natural phenomena. It fascinates me how nature has adapted species in different ways, just aiming for survival, and how this can be such a powerful tool to improve the mechanisms of a particular individual. With evolutionary algorithms, we simulate this process, albeit as a rough, low-fidelity version of what happens in nature.

Evolutionary algorithms have a simple framework that mirrors intricate natural phenomena, which paradoxically yields exceptional problem-solving capabilities. In my pursuit to understand why they are so good, I am still puzzled. I have read many papers related to natural evolution, and I have tried to follow up a little on findings through popular science magazines rather than technical sources.

The connection between algorithmic and natural evolution has always fascinated me. If I had the knowledge, time, and skills, I would devote the rest of my career to trying to understand how they operate.

How has the multi-objective optimization field evolved?

Though the domain of multi-objective optimization is comparatively narrow, my journey began in an era when opportunities were abundant due to the limited number of researchers. This allowed me to explore a diverse array of topics. While the landscape has evolved, I’ve observed that despite a proliferation of papers, a certain perspective is still lacking.

Why is this perspective lacking?

Researchers are somewhat hesitant to embrace difficult problems and push the boundaries of research topics. Moreover, we struggle to provide robust explanations for our methodologies. We are still not daring to tackle difficult problems and difficult research topics, and we are still not able to explain many of the things we have done. We are well-equipped with techniques for specific problems, yet we lack a deeper comprehension of those techniques’ underlying principles. Most people focus on proposing, not on understanding. This realization has prompted a shift in my focus.

What role do you take in this development?

As I’ve matured, my priority has shifted from merely proposing to understanding. I believe that if nobody else undertakes this task, it falls upon us to do so. While it is a difficult endeavour to dissect the mechanisms and reasons behind algorithmic efficacy, I consider this pursuit essential for real scientific advancement. You could have only two or three methods for a problem rather than 200. If there is no way to classify all these methods, one cannot justify a new tool, and I don’t think it makes much sense to proceed in this direction. Of course, people will keep producing, and that’s fine. But if we lack understanding, I think we will end up with a field with no future. Ultimately, my objective is to direct my efforts toward grasping existing tools before determining the need for novel ones.

How can we move towards a better understanding of existing methods?

We should spend more time trying to understand the things we already have. Then, we can assess what we actually need. We should work based on the domain’s needs instead of the desire to have more publications. If we don’t have a tool that does something we need, then let’s work on developing it. Research should be moving more in this direction of need rather than in the direction of producing numbers.

Are these questions centered around understanding why specific algorithms work?

Well, it’s not only about why they work. The question of why certain algorithms work is undoubtedly crucial, but our inquiries shouldn’t be limited to just that. A critical aspect to delve into is how best to match algorithms to applications. When presented with multiple algorithms, practitioners often grapple with deciding which one is best suited to a specific application, whether for combinatorial or continuous optimization. The ambiguity lies in discerning the ideal scenarios for each algorithm.

Today, while we do not have algorithms designed for specific tasks that require no further characterization, it is equally important to understand and perhaps categorize general algorithms. We should aim to extract more information about how they operate and evaluate whether they really are universally applicable or whether they should be tied to specific tasks.

Beyond algorithms, there are tools and techniques such as scalarizing functions, crossover operators, mutation operators and archiving techniques. There is a plethora of all of these. Yet, only a select few are commonly used, often because they have been employed historically rather than because of an intrinsic understanding of their efficacy. We should be addressing questions like: “Why use one method over another?” It’s these broader, nuanced inquiries that our domain must focus on.

Can you explain how evolutionary algorithms function in multi-objective optimization?

Evolutionary algorithms start with a set of solutions, normally generated randomly. These solutions initially have low quality, but through the selection process, they gradually evolve towards the Pareto front. However, it’s important to note that while a Pareto front is generated, users typically don’t require all the solutions on it. In the end, only a few or even just one solution is chosen. But choosing the right solution on the Pareto front is not optimization; it is decision making.

With decision-making, a subset or even a single solution is chosen from the Pareto front based on the user’s preferences. Determining a user’s preferences can be straightforward if they have a clear trade-off in mind, but when preferences are uncertain, the algorithm generates several possibilities for users to evaluate and choose from. This diverges from optimization and delves into decision-making. Thus, in multi-objective optimization, there are three distinct stages: modeling, optimization, and decision-making.
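To make the Pareto front and the decision-making step concrete, here is a minimal, hypothetical Python sketch. It is not any specific published algorithm: it mutates a population on a toy bi-objective problem (the choice of Schaffer’s f1 = x² and f2 = (x − 2)² is my own example), keeps the non-dominated solutions as an approximation of the Pareto front, and then picks a single compromise solution to stand in for the decision-making stage.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(solutions):
    """Keep only the solutions whose objective vectors are non-dominated."""
    return [s for s in solutions
            if not any(dominates(t["f"], s["f"]) for t in solutions if t is not s)]

def evaluate(x):
    # Toy bi-objective problem: minimize x^2 and (x - 2)^2.
    return (x * x, (x - 2.0) ** 2)

random.seed(1)
population = [{"x": random.uniform(-5.0, 5.0)} for _ in range(20)]
for ind in population:
    ind["f"] = evaluate(ind["x"])

for _ in range(100):
    # Gaussian mutation creates offspring; Pareto dominance drives selection.
    offspring = [{"x": ind["x"] + random.gauss(0.0, 0.3)} for ind in population]
    for ind in offspring:
        ind["f"] = evaluate(ind["x"])
    population = pareto_filter(population + offspring)
    if len(population) > 50:                 # crude archive truncation
        population = random.sample(population, 50)

# The surviving set approximates the Pareto front. Picking one solution from it
# (here: smallest objective sum) stands in for the decision-making stage.
choice = min(population, key=lambda ind: sum(ind["f"]))
print(len(population), "non-dominated solutions; example pick x =", choice["x"])
```

Real multi-objective evolutionary algorithms add explicit diversity preservation (for example, crowding or reference directions) on top of dominance-based selection; the random truncation above is only a crude placeholder for that.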

I primarily focus on the optimization aspect. Other researchers, particularly in operations research, delve into decision-making, and some combine both. These interactive approaches involve running the optimizer for a few iterations and then seeking user input on the desired direction, generating solutions based on the user’s preferences. Interactive methods can be effective, but crafting concise and meaningful user queries is crucial to avoid overwhelming the user.

On an earlier occasion, you mentioned that the most important criterion on which you select PhD students is their passion. How do you assess passion?

Ideally, students are passionate but are also excellent programmers and mathematicians. Unfortunately, students with all these skills are rare, and a balance between them needs to be found. One could say it is a multi-objective optimization problem in itself. Passion weighs heavily compared with other traits and skills in my assessment.

Passion is intricate to define but easier to recognize. When I encounter it, a kind of sixth sense guides me in differentiating real passion from feigned enthusiasm. One telltale sign is students who consistently go beyond the scope of assigned tasks, exceeding expectations. However, this is not the only indicator. Passionate individuals exhibit an insatiable curiosity, not only asking numerous questions about their topic but also independently delving into related areas. They bridge concepts, linking seemingly disparate elements to their work, which is an essential trait in research that relies on creative connections. For me, this indicates a real passion for the craft. In my experience, individuals with an innate passion tend to probe the depths of their topic and explore facets beyond the immediate instructions. Such students possess a research-oriented spirit, not merely seeking prescribed answers but uncovering avenues to enrich their understanding.

The final element involves leveraging and cultivating their skills. Even when a student excels primarily in passion, their other abilities may not be lacking. It’s rare to find a student embodying every desirable trait. More often, students excel in a particular facet while maintaining proficiency in others. For example, a student might excel in passion, possess good programming skills, albeit not extraordinary ones, and have a solid mathematical foundation. Striking a balance among these attributes constitutes a multi-objective problem: the aim is to get the most out of a student based on their unique skill set.

Why is passion so important?

I recall having a few students who were exceptional in various respects but lacked that spark of passion. The work we engaged in, consequently, felt rather mundane and uninspiring to me. A passionate student not only strives for their own growth but also reignites my enthusiasm for the subject matter. They challenge me, push me deeper into the topic, and make the collaborative process more stimulating. By contrast, a student who is merely going through the motions, focusing only on task completion without the drive to delve deeper, doesn’t evoke the same excitement. Such situations tend to become more about ticking boxes to make sure they graduate rather than an enriching exchange of knowledge and ideas. Simply put, without passion, the experience becomes transactional, devoid of the vibrancy that makes academic collaboration truly rewarding.

You prefer making a few valuable contributions rather than many papers that simply follow a research-by-analogy approach. Since there is often little novelty in research by analogy, should it be conducted at universities?

The question raises a fundamental consideration: the objectives of universities in their research endeavours. Research by analogy certainly has its place; it is important, and over time, it has incrementally pushed the boundaries of knowledge in specific directions. For example, in the context of multi-objective optimization, significant progress has occurred over the past 18 years, resulting in the development of improved algorithms. This success validates the role of research by analogy.

However, the potential downside lies in overreliance on research by analogy, which could stifle the reception of truly innovative ideas. Novel ideas, when introduced, might face resistance within a system that largely values incremental work. Consequently, a harmonious coexistence between the two modes of research is important. Institutions, evaluation systems, and academic journals should incentivize both. Research by analogy serves as a foundation for steady progress, while the cultivation of groundbreaking ideas drives the field forward. This coexistence ensures that while we build upon existing knowledge, we simultaneously embrace avenues leading to unexpected territories. A future devoid of either approach would be less than optimal; therefore, fostering a balanced ecosystem ensures that the field remains vibrant, adaptive, and poised for growth.

Do you incentivize this as well in your journal?

I do my best, but it’s difficult because it’s not solely within my control. The outcome hinges on the contributions of Associate Editors and reviewers. While I strive not to reject papers with novel ideas, it’s not always feasible. Unfortunately, I have to admit that encountering papers with genuinely new concepts is becoming increasingly rare. Notably, this year I reviewed a paper for a conference featuring an exceptionally intriguing concept that captivated me. It stands as the most remarkable discovery I’ve encountered in the past 15 years. However, such occurrences are not frequent.

Computational intelligence was historically divided into evolutionary computing, fuzzy logic, and neural networks. The last decade witnessed groundbreaking developments in neural networks, particularly transformer models. What role can evolutionary computing play in this new landscape?

I posit that evolutionary algorithms, traditionally used for evolving neural architectures, have potential yet to be fully harnessed. There is the possibility of designing robust optimizers that can seamlessly integrate with existing algorithms, like Adam, to train neural networks. There have been a few endeavours in this domain, such as the particle swarm approach, but these efforts are primarily focused on smaller-scale problems. Nonetheless, I anticipate the emergence of more complex challenges in the years ahead.
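As a rough illustration of the kind of gradient-free, swarm-based training mentioned above, here is a minimal particle swarm sketch in Python. The toy model (a single tanh unit with two weights), the swarm size, and the coefficients are my own assumptions for the example, not anything taken from the interview.

```python
import math
import random

# Toy task: recover the weights of y = tanh(w * x + b) from data, no gradients.
random.seed(0)
data = [(x / 10.0, math.tanh(1.5 * (x / 10.0) - 0.5)) for x in range(-30, 31)]

def loss(params):
    w, b = params
    return sum((math.tanh(w * x + b) - y) ** 2 for x, y in data) / len(data)

# Global-best particle swarm over the two weights.
n_particles, dims, iters = 20, 2, 200
w_inertia, c_cog, c_soc = 0.72, 1.49, 1.49   # commonly used PSO coefficients

pos = [[random.uniform(-3, 3) for _ in range(dims)] for _ in range(n_particles)]
vel = [[0.0] * dims for _ in range(n_particles)]
pbest = [p[:] for p in pos]
pbest_val = [loss(p) for p in pos]
gbest_val, gbest = min(zip(pbest_val, pbest))
gbest = gbest[:]

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w_inertia * vel[i][d]
                         + c_cog * r1 * (pbest[i][d] - pos[i][d])
                         + c_soc * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        val = loss(pos[i])
        if val < pbest_val[i]:            # update personal and global bests
            pbest_val[i], pbest[i] = val, pos[i][:]
            if val < gbest_val:
                gbest_val, gbest = val, pos[i][:]

print("recovered weights (w, b):", gbest, "loss:", gbest_val)
```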

Moreover, someone I know firmly believes that deep learning performance can be replicated using genetic programming. The idea could be described as “deep genetic programming.” By incorporating layered trees in genetic programming, the structure would resemble that of deep learning. This is relatively uncharted territory, divergent from the standard neural network approach. The potential advantages? Perhaps it would offer more computational efficiency or even higher accuracy. But the real advantage remains to be explored.

While there are researchers using genetic programming for classification, it’s not a widespread application. Genetic programming has more often been harnessed for constructing heuristics, especially hyper-heuristics for combinatorial optimization. I suspect the limited use for individual classification problems stems from the computational costs involved. Yet, I’m hopeful that with time and technological progress, we’ll see a shift.

In summary, evolutionary computing still has vast areas to explore, be it in augmenting neural networks or challenging them with unique methodologies. There’s ample room for coexistence and innovation.

Do you perceive the neural network focus as a trend or as a structural shift due to their superior performance?

Many AI people will tell you that it’s fashionable. I’m not so sure; I think it is a very powerful tool, and it will be difficult to outperform deep neural networks. Perhaps in 10–15 years it could happen, but not now. Their performance is such that I find it hard to envision any imminent rival that could easily outperform them, especially considering the extensive research and development invested in this space. Maybe in a decade or more we will witness changes, but presently they seem unmatched.

Yet AI is not solely about the tasks deep learning is known for. There are many AI challenges and domains that are not necessarily centered around what deep learning primarily addresses. Shifting our focus to those broader challenges could be beneficial.

One vulnerability to highlight in deep learning models is their sensitivity to ‘pixel attacks’. By tweaking only one pixel, a change often imperceptible to the human eye, these models can be deceived. Recently, evolutionary algorithms have been employed to execute these pixel attacks, shedding light on the frailties of neural networks. Beyond merely pinpointing these weaknesses, there is an opportunity for evolutionary algorithms to improve model resilience against such vulnerabilities. This is a promising avenue that integrates the strengths of both deep learning and evolutionary algorithms.
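To illustrate the evolutionary side of such a pixel attack, here is a small, hypothetical sketch. The “classifier” is a trivial brightness-based stand-in so the code runs on its own, and the search is a plain (mu + lambda) evolutionary loop rather than any specific published attack; the point is only the overall pattern of evolving a single-pixel change that lowers the model’s confidence.

```python
import random

# Stand-in "classifier": confidence in class 0 is just average brightness.
# A real attack would call an actual model's prediction function here.
def true_class_confidence(image):
    flat = [v for row in image for px in row for v in px]
    return sum(flat) / (255.0 * len(flat))

# Toy 8x8 RGB "image" that the stand-in classifier is confident about.
image = [[(200, 200, 200) for _ in range(8)] for _ in range(8)]

def apply_pixel(img, candidate):
    x, y, r, g, b = candidate
    patched = [row[:] for row in img]
    patched[y][x] = (r, g, b)
    return patched

def fitness(candidate):
    # Lower confidence in the original class means a better attack candidate.
    return true_class_confidence(apply_pixel(image, candidate))

def random_candidate():
    return (random.randrange(8), random.randrange(8),
            random.randrange(256), random.randrange(256), random.randrange(256))

def mutate(c):
    x, y, r, g, b = c
    return (min(7, max(0, x + random.choice((-1, 0, 1)))),
            min(7, max(0, y + random.choice((-1, 0, 1)))),
            *(min(255, max(0, v + random.randint(-40, 40))) for v in (r, g, b)))

# Simple (mu + lambda) evolutionary search over single-pixel perturbations.
random.seed(0)
population = [random_candidate() for _ in range(20)]
for _ in range(50):
    offspring = [mutate(random.choice(population)) for _ in range(20)]
    population = sorted(population + offspring, key=fitness)[:20]

best = population[0]
print("best single-pixel change:", best, "confidence drops to", fitness(best))
```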

This marks the end of our interview. Do you have a final remark?

I’d like to stress that research, whatever the domain, holds a fascinating allure for those driven by passion. Passion is an essential ingredient for anyone dedicating their career to research. Using tools can be satisfying, but true research involves unearthing solutions to uncharted problems and forging connections between seemingly disparate elements. Cultivating interest among the younger generation is paramount. Science constantly requires fresh minds, brimming with creativity, prepared to tackle progressively intricate challenges. Given critical issues such as climate change, pollution, and resource scarcity, science’s role in crafting sophisticated solutions becomes pivotal for our survival. Although not everyone may be inclined towards research, for those drawn to it, it is a rewarding journey. While not a path to easy wealth, it offers immense satisfaction in solving complex problems and contributing to our understanding of the world. It’s a source of joy, pleasure, and accomplishment, something I’ve personally cherished throughout my journey in the field.

This interview was conducted on behalf of the BNVKI, the Benelux Association for Artificial Intelligence. We bring together AI researchers from Belgium, the Netherlands and Luxembourg.
