Has Benzema declared war on Deschamps? The striker hints he will tell his own story about leaving the squad.

In an interview with Le Parisien, France coach Deschamps said that it was Benzema's own decision to withdraw from the FIFA World Cup Qatar 2022. Benzema then responded on social media with a post amounting to "he really dares to say that". On Saturday, Benzema was left out of Real Madrid's squad through injury and stayed in the gym to recover alone. He wrote on social media, "We don't have much choice", alongside a mock-crying picture that seemed pointed. On another platform, Benzema again hinted that he would personally tell the story of his withdrawal from the World Cup, writing: "I will explain myself to people." The next chapter of the French team's drama is worth watching.

Nanning's young footballers run in the sun to welcome the youth league.

This event is one of the few youth football competitions held in Nanning since the epidemic. Yesterday morning, after a brief opening ceremony, five teams faced off in a single round-robin format. Notably, the competition was not divided by gender: two women's teams, Nanning Sports School and Nanning No.17 Middle School, took part alongside the other three men's teams and put on a real show. On the full-size 11-a-side pitch the players had room to run, and the matches produced plenty of lively duels, with sprints to win the ball and direct dribbles straight at goal.

Ligue 1 response: Messi and Mbappé combine in stoppage time as Paris hit 13 goals in the four league matches around their Champions League exit.

A Ligue 1 headline fixture finished early this morning, with league leaders Paris Saint-Germain beating Brest 2-1 away for a fourth consecutive victory.

Paris had just been eliminated by Bayern in midweek, losing 0-3 on aggregate in the Champions League and missing the quarter-finals for the second year running. Back on familiar Ligue 1 ground, however, the Parisians quickly regrouped. Mbappé and Messi started together, the former looking to return to the top of the scoring chart and the latter chasing the 800th goal of his career.

In the 11th minute of the first half, Messi went to ground after getting a shot away in the penalty area. Soler followed up with a volley that struck the post; Messi got up again for the rebound, but the ball flew over the bar.

In the 32nd minute, Ramos won a free kick for Paris, but Messi's low effort was blocked by Virathit, who barely had time to react.

In the 37th minute, Paris, who had struggled to break through, finally took the lead: Mbappé's poked effort was saved by Bizot, and Soler reacted quickest to tuck in the rebound.

Greater Paris's advantage lasted only six minutes, however. In the 43rd minute, Del Castillo collected the ball deep and hit a long diagonal pass; Honorat got ahead of two defenders, drove into the penalty area and finished. Brest had levelled it at 1-1.

The second half became the stage for Brest goalkeeper Bizot. In the 65th minute, Soler laid the ball back, Messi received it with his back to a defender and pushed a shot toward the far corner, only for Bizot to somehow keep it out. In the 69th minute, Nuno Mendes's effort from a tight angle was blocked by Bizot, and in the 74th minute Mbappé's long-range shot was held by him.

In the 90th minute, Paris finally retook the lead. Messi shielded the ball in his own half and released a pass into the front court; Mbappé used his pace to break free of the defence, rounded the goalkeeper and rolled the ball into the empty net for a 2-1 stoppage-time winner. The goal drew Mbappé level with David and returned him to the top of the scoring chart.

With this win, Paris are 11 points clear of second place having played one game more, leaving them within touching distance of the league title. Around the two-legged 0-3 defeat to Bayern, they have won four consecutive league matches and scored 13 goals. They could go no further in the Champions League, but in Ligue 1 they remain the undisputed kings.

Application of AI Algorithm in Big Data Governance

Introduction: This article shares the experience of DataCake in applying AI algorithms to big data governance. The sharing is divided into five parts. The first part clarifies the relationship between big data and AI: big data not only serves AI, but can also use AI to optimize its own services; the two support and depend on each other. The second part introduces the practice of using an AI model to comprehensively evaluate the health of big data tasks, providing a quantitative basis for subsequent data governance. The third part introduces the practice of using an AI model to intelligently recommend Spark task runtime parameters, improving cloud resource utilization. The fourth part introduces the practice of intelligently recommending the execution engine for SQL queries. The fifth part looks ahead to AI application scenarios across the whole big data life cycle.

Contents:

1. Big data and AI

2. Health assessment of big data tasks

3. Spark task intelligent parameter adjustment

4. Intelligent selection of SQL task execution engine

5. Application prospects of AI algorithms in big data governance

Speaker | Li Weimin, Algorithm Engineer, Happy Eggplant

Editor | Charles

Production Community | DataFun

01

Big data and AI

It is generally believed that cloud computing collects and stores massive amounts of data, which forms big data; through mining and learning on that big data, AI models are then built. This view takes it for granted that big data serves AI, while ignoring the fact that AI algorithms can also feed back into big data. The relationship between the two is bidirectional: they support and depend on each other.

The life cycle of big data can be divided into six stages, each of which faces its own problems, and the proper use of AI algorithms can help solve them.

Data acquisition: This stage focuses on the quality, frequency and security of data collection, for example whether the collected data is complete, whether the collection rate is too fast or too slow, and whether the collected data has been desensitized or encrypted. AI can play a role here, for example by evaluating the reasonableness of log collection against similar applications, or by using anomaly detection to spot sudden surges or drops in data volume (see the sketch after this list).

Data transmission: This stage focuses on the availability, integrity and security of data; AI algorithms can be used for fault diagnosis and intrusion detection.

Data storage: This stage focuses on whether the storage structure of the data is reasonable, whether resource occupation is low enough, and whether it is secure enough; AI algorithms can also be used for evaluation and optimization here.

Data processing: This is the stage where optimization has the most visible payoff. Its core problem is improving data processing efficiency while reducing resource consumption, and AI can optimize it from multiple angles.

Data exchange: Cooperation between enterprises is increasing, which raises data security concerns. Algorithms can be applied here as well; for example, federated learning can help share data better and more safely.

Data destruction: Data cannot just be kept forever; we need to decide when it can be deleted and whether deletion carries risk. On top of business rules, AI algorithms can help judge the timing of deletion and its associated impact.
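As a small illustration of the anomaly-detection idea mentioned under data acquisition above, the sketch below flags days whose collected data volume deviates sharply from recent history. It is a minimal example; the window size and threshold are assumptions for illustration, not values from the talk.

```python
import numpy as np

def flag_volume_anomalies(daily_volumes, window=14, z_threshold=3.0):
    """Flag days whose collected data volume jumps or drops abnormally.

    daily_volumes: daily row counts (or bytes), oldest first.
    A day is flagged when it deviates from the trailing-window mean by
    more than z_threshold standard deviations (illustrative defaults).
    """
    volumes = np.asarray(daily_volumes, dtype=float)
    anomalies = []
    for i in range(window, len(volumes)):
        history = volumes[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma == 0:          # constant history: any change is suspicious
            is_anomaly = volumes[i] != mu
        else:
            is_anomaly = abs(volumes[i] - mu) / sigma > z_threshold
        if is_anomaly:
            anomalies.append(i)
    return anomalies

# Example: a sudden drop on the last day is flagged.
print(flag_volume_anomalies([100, 102, 98, 101, 99, 103, 100, 97,
                             101, 100, 99, 102, 98, 100, 12]))
```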

Overall, data life-cycle management has three goals: efficiency, low cost, and security. In the past we relied on expert experience to craft rules and strategies, which was costly and inefficient. The proper use of AI algorithms can avoid these drawbacks and feed back into the construction of basic big data services.

02

Health Assessment of Big Data Tasks

At Happy Eggplant, the first application scenario that has already landed is the health assessment of big data tasks.

Thousands of tasks run on the big data platform every day. However, many of them stop at producing correct output; nobody pays attention to their running time or resource consumption, which leaves many tasks inefficient and wasteful of resources.

Even when data developers start paying attention to task health, it is hard to judge accurately whether a task is healthy. Many indicators are involved, such as failure rate, running time and resource consumption, and tasks naturally differ in complexity and in the volume of data they process, so simply using the absolute value of a single indicator as the standard is clearly unreasonable.

Without a quantitative measure of task health, it is hard to determine which tasks are unhealthy and need governance, let alone where the problems lie and where to start. Even after governance, we cannot tell how effective it was; some indicators may improve while others deteriorate.

Demand: Faced with the above problems, we urgently need a quantitative indicator that accurately reflects the overall health of a task. Making rules manually is inefficient and incomplete, so we turned to machine learning. The goal is for the model to give each task a quantitative score and its position in the global distribution, and to point out the task's main problems and how to fix them.

To meet this demand, the functional module displays the key information of all tasks under an owner's name in the management interface, such as score, task cost, CPU utilization and memory utilization. The health of each task is then clear at a glance, making it easy for the owner to manage tasks going forward.

As for the modeling scheme behind the scoring function, we treat it as a classification problem. Intuitively, task scoring looks like a regression problem, with the score being an arbitrary real number between 0 and 100. But that would require enough samples labeled with scores, and manual score labeling is both costly and unreliable.

Therefore, we transform the problem into a classification one: the class probability given by the classifier can be further mapped to a real-valued score. We divide tasks into two categories, good tasks (label 1) and bad tasks (label 0), labeled by big data engineers. A good task generally means one that runs quickly and consumes few resources for the same workload and complexity.

The model training process is as follows:

The first step is sample preparation. Our samples come from historical task data, and the features include running time, resources used, whether execution failed, and so on. The labels, good or bad, are assigned by big data engineers based on rules or experience. We then train the model; we tried LR, GBDT, XGBoost and other models, and both theory and practice showed that XGBoost classified best. The model ultimately outputs the probability that a task is a "good task": the higher the probability, the higher the final mapped score.
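A minimal sketch of this training and scoring flow, assuming an XGBoost classifier over engineer-labeled historical task records. The synthetic features, label generation and hyperparameters are illustrative assumptions, not the production setup described above.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Illustrative feature matrix: one row per historical task run, columns such as
# running time, resource usage, failure count, utilization, ... (the production
# model distilled ~19 features out of ~50 raw ones).
rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 19))
# Engineer-assigned labels: 1 = good task, 0 = bad task (random here).
y = rng.integers(0, 2, size=10_000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=5, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_train, y_train)

# The predicted probability of "good task" is mapped directly to a 0-100 health score.
health_scores = model.predict_proba(X_test)[:, 1] * 100
print(health_scores[:5])
```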

After training, 19 features were selected from the initial set of nearly 50 raw features, which are essentially enough to determine whether a task is a good one. For example, tasks with many failures or low resource utilization mostly do not score highly, which is consistent with human intuition.

After scoring tasks with the model, we treat tasks scoring between 0 and 30 as unhealthy tasks that need urgent governance; tasks between 30 and 60 as acceptably healthy; and tasks scoring 60 or above as healthy tasks that only need to maintain the status quo. With this quantitative indicator, task owners can be guided to actively govern certain tasks, achieving the goal of reducing cost and increasing efficiency.

Applying the model brought us the following benefits:

(1) Task owners can see the health of the tasks under their name, and can tell from scores and rankings whether a task needs governance;

(2) The quantitative indicator provides a basis for subsequent task governance;

(3) After governance is completed, how much benefit and improvement was achieved can also be demonstrated quantitatively through the scores.

03

Spark task intelligent parameter adjustment

The second application scenario is intelligent parameter tuning for Spark tasks. A Gartner survey found that 70% of the cloud resources consumed by cloud users are wasted unnecessarily. When applying for cloud resources, many people over-provision to make sure the task succeeds, which causes unnecessary waste. Many others simply keep the default configuration when creating tasks, which is rarely optimal. Careful configuration can achieve very good results: it keeps the task running efficiently and successfully while saving a great deal of resources. However, parameter configuration demands a lot from users; beyond understanding what each configuration item means, they also have to consider the interactions between items. Even with expert experience it is hard to reach the optimum, and rule-based strategies are hard to adjust dynamically.

This raises a requirement: we want a model that intelligently recommends the optimal runtime parameter configuration for a task, improving cloud resource utilization while keeping the task's original running time unchanged.

The parameter tuning module is designed for two situations: first, for existing tasks, the model should recommend the most suitable configuration parameters based on the task's historical runs; second, for tasks the user has not yet put online, the model should be able to give a reasonable configuration by analyzing the task itself.

The next step is to train the model. First we must determine the model's output targets. There are more than 300 configurable items, and the model cannot possibly recommend them all. After testing and investigation, we chose the three parameters with the greatest influence on task performance: the number of executor cores, the executor memory, and the number of executor instances (i.e., spark.executor.cores, spark.executor.memory and spark.executor.instances). Each configuration item has a default value and an adjustable range; in effect, given this parameter space, the model only needs to find the optimal solution within it.

Two schemes were tried in the training stage. Option one is to learn the experience rules. In the early days parameters were recommended by hand-written rules, and the results after going online were good, so we first let the model learn this set of rules in order to get online quickly. The training samples are more than 70,000 task configurations previously computed by the rules. The features are the task's historical run data (such as the amount of data processed, resources used, and time consumed) plus some statistics (such as the average and maximum consumption over the past seven days).

As the base model we chose a multiple regression model with multiple dependent variables. A common regression model has a single output: many independent variables but only one dependent variable. Here we want to output three parameters, so we adopt a multi-output regression model, which is essentially still a linear (LR) model.

The theoretical basis of this model is as follows: the label is multi-dimensional, i.e., the three configuration items; β is the coefficient of each feature and σ is the error term. Training works the same way as univariate regression, using least squares to minimize the sum of squares of all elements in σ.
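A compact sketch of this multi-output regression, assuming scikit-learn's LinearRegression, which fits all three targets jointly by ordinary least squares. The random data stands in for the roughly 70,000 rule-generated configurations mentioned above; the feature and target layouts are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# X: historical run statistics of each task (data volume, past resource usage,
#    runtime, 7-day averages, ...); random placeholders here.
# Y: the three Spark parameters produced by the legacy rules:
#    [executor cores, executor memory (GB), executor instances].
rng = np.random.default_rng(0)
X = rng.normal(size=(70_000, 20))
Y = rng.normal(size=(70_000, 3))

# LinearRegression natively supports multiple dependent variables:
# it solves min ||Y - X @ B||^2 for all three targets at once.
model = LinearRegression().fit(X, Y)

cores, memory_gb, instances = model.predict(X[:1])[0]
print(cores, memory_gb, instances)
```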

The advantage of option one is that the rules can be learned quickly and the cost is relatively small. The drawback is that its optimization ceiling is the rules themselves: at best it matches them, and it is hard to do better.

The second scheme is Bayesian optimization, which, somewhat like reinforcement learning, tries to find the optimal configuration within the parameter space. The Bayesian framework is used because it can build on previous attempts: each new trial carries prior knowledge from earlier ones, so it converges on a better configuration quickly. The whole training process runs within the parameter space: a configuration is sampled and the task is run with it; after the run we look at indicators such as utilization and cost to judge whether it is optimal; then the steps repeat until tuning is complete. After the model is trained, there is also a small trick at serving time: if a new task is sufficiently similar to a historical one, there is no need to compute the configuration again, and the previous optimal configuration can be reused directly.
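A hedged sketch of the Bayesian-optimization loop, assuming the scikit-optimize library. The parameter bounds, the stand-in task runner and the cost formula are illustrative assumptions; a real harness would submit the Spark task with each configuration and read back its runtime and resource metrics.

```python
from skopt import gp_minimize
from skopt.space import Integer

BASELINE_RUNTIME_SEC = 600          # illustrative baseline runtime of the task

def run_task_with_config(cores, memory_gb, instances):
    """Placeholder for the real harness: submit the Spark task with this
    configuration and return (runtime_sec, resource_cost)."""
    runtime = 3000 / (cores * instances) + 60
    cost = cores * memory_gb * instances * runtime / 3600
    return runtime, cost

# Search space: the three Spark parameters, with illustrative bounds.
space = [Integer(1, 8,  name="executor_cores"),
         Integer(2, 32, name="executor_memory_gb"),
         Integer(2, 50, name="executor_instances")]

def objective(params):
    cores, memory_gb, instances = params
    runtime_sec, resource_cost = run_task_with_config(cores, memory_gb, instances)
    # Penalize configurations that slow the task beyond its baseline runtime.
    penalty = max(0, runtime_sec - BASELINE_RUNTIME_SEC) * 10
    return resource_cost + penalty

# Each trial updates a Gaussian-process surrogate, so later samples are guided
# by earlier results instead of being drawn blindly.
result = gp_minimize(objective, space, n_calls=30, random_state=42)
print("best configuration:", result.x)
```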

After trialling both schemes in practice, we saw clear results. For existing tasks, after adjusting them according to the parameters recommended by the model, more than 80% of tasks improved resource utilization by about 15%, and some tasks even doubled it. Both schemes have defects, however: the regression model that learns the rules has a low optimization ceiling, while the drawback of the globally optimizing Bayesian model is that its many trial runs are expensive.

The future exploration directions are as follows:

Semantic analysis: Spark semantics are rich, covering different code structures and operator functions, and these are closely related to parameter configuration and resource consumption. At present we only use a task's historical runs and ignore the Spark semantics themselves, which wastes information. The next step is to go down to the code level, analyze the operators contained in a Spark task, and tune at a finer granularity accordingly.

Classification tuning: Spark has many application scenarios, such as pure analysis, development and processing. The tuning space and objectives differ across scenarios, so tuning should be done per category.

Engineering optimization: One practical difficulty is that samples are few and testing is expensive, which requires the relevant parties to cooperate on optimizing the project and the process.

04

Intelligent selection of SQL task execution engine

The third application scenario is the intelligent choice of SQL query task execution engine.

Background:

(1) The SQL query platform is the big data product that most users touch most often and experience most directly. Data analysts, developers and product managers alike write large amounts of SQL every day to get the data they want.

(2) Many people pay no attention to the underlying execution engine when they run SQL tasks. For example, Presto computes purely in memory: in simple query scenarios its advantage is speed, but its disadvantage is that if memory runs out the query simply fails. Spark, by contrast, is better suited to complex scenarios with large data volumes: even if an OOM occurs it can spill to disk and avoid failing the task. So different engines suit different task scenarios.

(3) The effectiveness of a SQL query must weigh both execution time and resource consumption: we can neither chase query speed regardless of resources, nor sacrifice query efficiency to save resources.

(4) The industry has three traditional approaches to engine selection: RBO, CBO and HBO. RBO is a rule-based optimizer; its rules are hard to write and are updated infrequently. CBO is cost-based optimization; chasing cost reduction too aggressively may cause task execution to fail. HBO is an optimizer based on historical task runs, and is limited to what the historical data covers.

In the functional design, after the user writes a SQL statement and submits it for execution, the model automatically judges which engine to use and prompts the user in a pop-up window; the user makes the final decision on whether to run with the recommended engine.

The overall scheme is to recommend the execution engine based on the SQL statement itself. The statement reveals which tables and functions it uses, and this information directly determines the complexity of the SQL, which in turn affects the engine choice. Training samples come from historically executed SQL statements, and the labels are assigned according to historical execution: tasks with long run times and huge data volumes are marked as suited to Spark, and the rest as suited to Presto. Features are extracted with NLP techniques, using N-grams plus TF-IDF; the idea is to extract phrases and weigh them by their frequency in the statements, yielding key word groups. Because the resulting feature vectors are very large, we first select 3,000 features with a linear model, then train an XGBoost model as the final predictor.
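A condensed sketch of this feature pipeline, assuming scikit-learn and xgboost. The two toy SQL statements, the hyperparameters, and the label convention (1 = Spark, 0 = Presto) are illustrative; only the N-gram TF-IDF step, the 3,000-feature budget selected by a linear model, and the final XGBoost classifier follow the description above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Historical SQL statements and their engine labels (1 = Spark, 0 = Presto),
# derived from historical runtime and data volume; toy examples here.
sql_texts = ["select count(*) from orders where dt = '2023-03-01'",
             "select a.*, b.* from big_a a join big_b b on a.k = b.k join big_c c on b.k = c.k"]
labels = [0, 1]

# N-gram + TF-IDF turns each statement into a sparse keyword-group vector.
vectorizer = TfidfVectorizer(ngram_range=(1, 3), token_pattern=r"\w+")
X = vectorizer.fit_transform(sql_texts)

# Keep at most 3,000 features, ranked by an L1 linear model.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear"),
    max_features=min(3000, X.shape[1]), threshold=-float("inf"))
X_selected = selector.fit_transform(X, labels)

# Final prediction model.
engine_model = XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
engine_model.fit(X_selected, labels)
```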

After training, the model's prediction accuracy is fairly high, at roughly 90% or above.

The online workflow of the final model is as follows: after the user submits the SQL, the model recommends an execution engine. If it differs from the engine the user originally selected, a language conversion module is called to translate the SQL statement. If execution fails after the engine switch, a failover mechanism switches back to the user's original engine, guaranteeing that the task still succeeds.
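The workflow just described can be condensed into a few lines of Python. Everything here is a hypothetical stand-in: the three injected callables represent the internal recommendation model, the SQL dialect converter and the query runner, none of which are named in the talk.

```python
class QueryExecutionError(Exception):
    """Raised by the query runner when a statement fails on an engine."""

def execute_with_failover(sql, user_engine, recommend_engine, translate_sql, run_query):
    """Recommend an engine, convert the SQL dialect if the recommendation differs
    from the user's choice, and fall back to the original engine on failure."""
    engine = recommend_engine(sql)
    statement = translate_sql(sql, engine) if engine != user_engine else sql
    try:
        return run_query(statement, engine)
    except QueryExecutionError:
        # Failover: guarantee the query still succeeds on the user's chosen engine.
        return run_query(sql, user_engine)
```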

The benefit of this practice is that the model automatically selects the most suitable execution engine and handles the subsequent statement conversion, without requiring users to learn anything extra.

In addition, the engine recommended by the model essentially preserves the original execution efficiency while lowering the failure rate, so the overall user experience improves.

Finally, with less unnecessary use of the more expensive engine and a lower task failure rate, overall resource cost consumption has dropped.

Parts two through four shared three applications of AI algorithms on the big data platform. One visible characteristic is that the algorithms used are not particularly complicated, yet the effects are very noticeable. This encourages us to proactively look for pain points and optimization opportunities in the day-to-day operation of the big data platform; once an application scenario is identified, we can try different machine learning methods to solve the problem, and thereby let AI feed back into big data.

05

Application Prospect of AI Algorithm in Big Data Governance

Finally, let us look ahead to the application scenarios of AI algorithms in big data governance.

The three application scenarios described above all focus on the data processing stage. In fact, echoing the relationship between AI and big data laid out in the first section, AI can play a useful role across the entire data life cycle.

For example, in the data acquisition stage, AI can judge whether log collection is reasonable; during transmission it can do intrusion detection; during processing it can further cut costs and raise efficiency; during exchange it can help guarantee data security; and at destruction time it can judge the timing and downstream impact of deletion. There are many application scenarios for AI on the big data platform, and the cases above are only a starting point. We believe the mutually supportive relationship between AI and big data will become even more prominent: AI helps big data platforms collect and process data better, and better data quality in turn helps train better AI models, forming a virtuous circle.

06

Question and answer session

Q1: What kind of rule engine is used? Is it open source?

A1: The parameter tuning rules here were written by our big data colleagues based on their early manual tuning experience, for example: if a task's execution time exceeds so many minutes, or it processes so much data, recommend so many cores or so much memory. It is a rule set accumulated over a long period, and it worked well after going online, so we used it to train our parameter recommendation model. In other words, it is an in-house rule set rather than an open-source rule engine.

Q2: Is the dependent variable only the adjustment of parameters? Have you considered the influence of the performance instability of the big data platform on the calculation results?

A2: When making parameter recommendations we do not simply chase the lowest cost, otherwise the recommended resources would be too low and tasks would fail. It is true that the dependent variables are only the parameter adjustments, but to guard against instability we add extra constraints. First, for the model features we use averages over a period of time rather than values from a single, isolated day. Second, we compare the parameters recommended by the model against the actual configured values; if the gap is too large, we apply a gradual ramp-up/ramp-down strategy (illustrated after A5 below) to avoid failing the task with a single overly aggressive adjustment.

Q3: Are regression model and Bayesian model used at the same time?

A3: No. As just described, we have used two schemes for parameter recommendation: learning the rules uses the regression model, and then the Bayesian optimization framework is used. They are not used at the same time; they were two successive attempts. The advantage of the earlier rule-learning scheme is that it can quickly exploit past experience, while the second scheme can find a better or even optimal configuration on top of it. The relationship between the two is sequential and progressive, not simultaneous.

Q4: Is semantic analysis being introduced in order to expand the feature set?

A4: Yes. As mentioned just now, the information we currently use when tuning Spark is only the task's execution history; we have not yet looked at the Spark task itself. A Spark job actually contains a great deal of information, including its operators and stages, and not analyzing its semantics means losing a lot of information. Our next plan is therefore to analyze the semantics of Spark tasks and expand the feature set to assist parameter calculation.

Q5: Could an unreasonable parameter recommendation cause a task to behave abnormally or even fail? How do you reduce task errors and fluctuation in that scenario?

A5: If we relied entirely on the model, it might chase resource utilization as aggressively as possible, and the recommended parameters could be quite radical, for example shrinking memory from 30 GB to 5 GB in one step. So on top of the model's recommendation we add constraints such as a cap on how many gigabytes a single adjustment may span, which is the gradual ramp-up/ramp-down strategy.
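A tiny illustration of the ramp-up/ramp-down safeguard described in A2 and A5; the 8 GB per-round step limit is an invented example value.

```python
def clamp_adjustment(current, recommended, max_step):
    """Move from the current value toward the model's recommendation,
    but never change it by more than max_step in one round."""
    if recommended > current:
        return min(recommended, current + max_step)
    return max(recommended, current - max_step)

# The model wants to cut executor memory from 30 GB straight to 5 GB;
# with an 8 GB per-round limit the first adjustment only goes to 22 GB.
print(clamp_adjustment(current=30, recommended=5, max_step=8))   # -> 22
```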

Q6: SIGMOD 2022 has some papers related to parameter tuning. Did you reference any of them?

A6: Intelligent task parameter tuning is still a hot research direction, and teams in different fields have adopted different methods and models. Before starting, we surveyed many industry approaches, including the SIGMOD 2022 papers you mention. After comparison and experimentation we settled on the two schemes shared today. We will keep following the latest progress in this direction and try more methods to improve the recommendation quality.

That’s all for today’s sharing. Thank you.


About DataFun: DataFun focuses on sharing and exchange around big data and artificial intelligence technology applications. Founded in 2017, it has held more than 100 offline and 100 online salons, forums and summits in Beijing, Shanghai, Shenzhen, Hangzhou and other cities, inviting more than 2,000 experts and scholars to share. Its WeChat official account, DataFunTalk, has accumulated 900+ original articles, more than a million reads and 160,000+ followers.

What happens when an AI chatbot meets the metaverse?

Please write an essay about the metaverse with me.
In today's society, with the development of science and technology and the spread of the Internet, people's demand for virtual worlds keeps growing. The metaverse, as a concept of a virtual world, has attracted wide attention and research. The metaverse is a virtual space that integrates various virtual reality technologies and applications, in which users can carry out all kinds of activities, such as socializing, gaming, education and business. Unlike traditional virtual reality, the metaverse is an open and sustainable virtual world that can accommodate countless users and applications and enable cross-platform, cross-device interconnection. Its application prospects are very broad. Socially, it lets users meet new friends and share their lives and experiences in a virtual world. In gaming, it can provide a more realistic experience and let players immerse themselves. In education, it can offer a more vivid and intuitive learning experience so that students understand knowledge better. In business, it can give enterprises brand-new marketing channels and business models, bringing more opportunities. However, the development of the metaverse also faces challenges. First, technical limitations may affect its pace of development and range of applications. Second, security and privacy in the virtual world must be taken seriously and guaranteed. Third, the connection and coordination between the virtual world and the real world need further improvement and optimization. Overall, the metaverse is a virtual world with great potential that will bring us more opportunities and challenges; we need to keep pushing technological innovation and application exploration to better realize its development and use.
What will humans be like when the metaverse is realized in the future?
If the metaverse is truly realized, human life will change greatly: we will enter a brand-new virtual world that blends with the real one. First, the metaverse will become an important part of people's lives. In it, people can carry out all kinds of activities, such as socializing, gaming, education and business; they can build virtual identities and social networks and interact and communicate with other users. It will also become an important place for study and work, offering more flexible and diverse ways of learning and working. Second, the metaverse will change people's perception and cognition. In it, people can experience more vivid and intuitive virtual reality, which will profoundly influence how they perceive and understand the world; they can explore the unknown, expand their cognitive boundaries, and further stretch their thinking and imagination. Finally, the metaverse will bring people more opportunities and challenges. People will be able to create their own virtual worlds and applications and provide services and value to other users. At the same time, it will face challenges such as technical limitations and security and privacy issues, which require continuous technological innovation and application exploration. Overall, the metaverse will bring great change and opportunity to humanity, and we must keep exploring and innovating to adapt to this new era.
Which industry do you think is most likely to be the first to build metaverse projects?
At present, the games industry is the most likely to lead the way into metaverse projects. The games industry has accumulated rich experience and technology in virtual reality and augmented reality, and a game is itself a virtual world, providing a good foundation and support for metaverse development. It is also a very open and innovative industry with strong technical strength and creativity. Many game companies have already begun exploring metaverse applications, such as Epic Games with its Unreal Engine and Roblox with its virtual world platform; these companies have the conditions and capability to enter metaverse projects. Of course, other industries may also move early, such as social media, education, healthcare and finance. Whichever industry it is, issues of technology, security and privacy must be fully considered in order to realize the metaverse's development and application well.
Can you imagine what metaverse classes in the education industry will look like?
When the education industry enters the metaverse, we can imagine a brand-new class format. Students and teachers can interact through virtual reality technology, creating a more realistic and intuitive learning environment. First, students can create their own virtual identities in the metaverse and interact with other students and teachers: they can attend class in virtual classrooms, communicate face to face with teachers, or take part in virtual lectures, seminars and other activities. Second, the learning environment can be customized for different subjects and courses. In a history course, for instance, students could enter a virtual historical scene such as an ancient Roman city or the Egyptian pyramids to experience the culture and events; in a science course, they could enter a virtual laboratory to perform experiments and observations. Finally, learning content in the metaverse can be more diverse and personalized: students can practice and experience through virtual reality simulations to better understand and master knowledge, while teachers can provide more personalized content and support based on each student's interests and needs. Overall, the education industry's entry into the metaverse will bring students and teachers more open, innovative and diverse ways of learning, with far-reaching effects on the future of education.
Do you think the metaverse can really be realized? When will humanity enter a true metaverse era?
The metaverse is a very advanced and complex virtual world whose realization depends on advanced technology and equipment. Technologies such as virtual reality, augmented reality, blockchain and artificial intelligence are constantly developing and innovating, which provides a good technical foundation for the metaverse. However, realizing it still requires overcoming problems of technology, security and privacy; for example, how to protect users' privacy and data security, and how to avoid virtual reality motion sickness, all need technical and legal support and solutions. Many companies and organizations have already begun exploring metaverse applications, such as Facebook's Horizon Workrooms and Epic Games' Unreal Engine, and these projects have made some progress. It is predicted that in the coming years the metaverse will gradually become an important technical and commercial field, and humanity will gradually enter a real metaverse era. Overall, realizing the metaverse requires support and effort on many fronts, but with continued technological development and innovation we believe it is possible, and it will bring humanity a more open, innovative and diverse virtual world.
What role do you think the development of AI will play in realizing the metaverse era?
AI is one of the key technologies for realizing the metaverse, and it will play a very important role in bringing the metaverse era about.
Its specific functions include the following:
1. Intelligent interaction: AI can give users in the metaverse more intelligent and natural ways to interact; for example, speech recognition and natural language processing make interacting with the metaverse more convenient.
2. Intelligent management: AI can give metaverse operators more intelligent and automated management tools, such as automatic security monitoring and virtual economy management, greatly improving the metaverse's management efficiency and safety.
3. Intelligent content generation: AI can give content creators in the metaverse more intelligent and automated ways to generate content, such as automatically generating virtual scenes and virtual characters, greatly improving the efficiency and quality of content production.
4. Intelligent recommendation: AI can recommend more personalized content and services to users based on their interests and habits, improving user satisfaction and experience.
Overall, AI technology will play a very important role in realizing the metaverse, providing more intelligent, natural and personalized services and experiences and thereby greatly increasing the metaverse's value and appeal.
Can AI give virtual humans genuine, human-like intelligence?
Although today's AI technology is very advanced, it still cannot fully simulate the intelligence of the human brain, especially in emotion, creativity and intuition. However, as AI technology continues to develop and innovate, it may become possible in the future for virtual humans to exhibit brain-like intelligence. Some researchers are exploring deep learning, neural networks and other techniques to simulate the brain's intelligence; for example, deep learning can let virtual humans learn and adapt to their environment automatically, and neural network techniques can let them make complex decisions and perform reasoning.
At the same time, some researchers are exploring ways of combining the human brain with computers, such as brain-computer interfaces that translate brain signals into computer instructions, enabling more natural and efficient interaction and control. Overall, although current AI cannot fully simulate the human brain's intelligence, with continued technological development and innovation we believe virtual humans with brain-like intelligence may become possible in the future.
Do you think AI will accelerate the evolution of the human brain?
The development of AI technology can indeed influence human evolution, but whether it will accelerate the evolution of the human brain needs more research and discussion. On the one hand, AI can give humans more intelligent and efficient tools and services, helping them perform better in many fields; in medicine, education, finance and elsewhere, AI is already widely used and helps people understand and handle problems better, advancing human society. On the other hand, AI may also have negative effects on human evolution: over-reliance on it could weaken human intelligence and creativity, and might even erode our ability to adapt to the natural environment.
In addition, the continued development of AI may raise moral and ethical problems that we must handle carefully. Therefore, while making full use of AI, we should stay vigilant and actively explore how to advance human evolution and development with its help.
Your answer is very wise and logical, thank you!
You’re welcome. I will try my best to provide you with accurate and useful information and help. Please feel free to let me know if you have any other questions or need help.

Interview | Zhou Dong, founder and CEO of Ruihe Cloud Map: anchoring on SASE technology to empower cloud security 2.0

Change contains opportunity, and weathering ups and downs is how a strong tree grows. As cloud computing continues to penetrate the market, cloud security 1.0 is evolving into cloud security 2.0, and security technologies are iterating along with it.

According to Zhou Dong, founder and CEO of Ruihe Cloud Map, in the cloud security 1.0 era, once business moved to the cloud, the whole IT architecture changed: physical boundaries were no longer clear, and the traditional approach of stacking security hardware no longer worked, so a dedicated cloud security system had to be built.

In the cloud security 2.0 era, an operable security system is built on cloud security technology and delivered as a cloud-based security service platform, so that customers can obtain security on demand and pay as they go, much like using water and electricity, and security can truly land.

Zhou Dong, founder and CEO of Ruihe Cloud Map

Founded in December 2014, Ruihe Cloud Map offers an overall cloud security solution centered on its cloud security management platform, cloud security resource pool and threat situation awareness. It uses SASE (secure access service edge) technology to empower cloud security 2.0, solving in one stop a series of problems in cloud security, namely that it is unknowable, invisible and unmanageable, and has begun large-scale commercialization among operators, governments and central state-owned enterprises.

Cloud security is the next big wave.

"I spent my first ten years on technology, the next ten in sales, and recent years immersed in cloud security." Looking back on his working life, this may be the best annotation of Zhou Dong's career.

Born in the 1970s, Zhou Dong studied pattern recognition and artificial intelligence at Tsinghua University. Shaped by the university's rigorous style of study, he developed a persistent habit of thinking deeply about problems and probing the essence of things.

From an academic point of view, he says, the core of artificial intelligence is extracting the features of things and recognizing them. Artificial neural networks began to rise in the early 1990s, but their application results were not very satisfactory; it was only with the advent of deep learning algorithms that the artificial intelligence market gradually opened up.

Having lived through the Internet wave, Zhou Dong kept asking himself what the next big wave after the Internet would be. Around 2008 he concluded that the "cloud" was that wave, and chose to join a security company.

With a pragmatic, never-give-up attitude, once Zhou Dong has set his sights on something he never relaxes, and keeps aiming at that direction.

In 2012, he realized he had to catch the next wave, cloud security. After much deliberation he decided to seize the opportunity, follow the trend, and found Ruihe Cloud Map with cloud security as its core technology.

In his view, a good security company must have its own system and methodology, and must find the balance between its core value points and the points that satisfy customers, so that cloud security applications can truly land.

Breaking through blurred boundaries

Zhou Dong emphasized that the cloud is not a single technology but an IT infrastructure. The next cloud security system must be built on cloud-native capability and convergence, providing users with protection at the network layer, the data layer and even across their various applications.

Although security technology keeps developing and new techniques emerge one after another, the underlying security logic has not changed. Zero trust, data security and trusted computing, for example, are in essence highly consistent: they are all cryptographic systems built on identity authentication, authorization, and encryption and decryption. What changes is the IT infrastructure and the new business application models; what stays the same is the underlying security logic and the idea of systematic, integrated protection.

What customers ultimately want is for service providers to deliver the security capabilities and packaged solutions they need. For security vendors, therefore, the shortest path is to integrate these technical capabilities into one system and provide comprehensive protection against all kinds of network attacks. Zhou Dong has accordingly adopted SASE-based services that integrate various security capabilities, such as firewalls for perimeter protection and identity-based rights and encryption for data security.

In essence, whether in the customer's physical environment or on the cloud, the IT system faces the same security problem: the boundary is no longer clear.

The first dimension is the technical boundary: within the cloud system, the core task is to resolve the blurred boundaries of the IT system and to re-establish software-defined security capabilities where the old boundary has disappeared. The second dimension is the responsibility boundary: for cloud service providers, cloud tenants and third-party operation and maintenance providers, an operable and measurable system is defined to divide their respective responsibilities.

Starting from these two dimensions, Zhou Dong has built a unique security system of Ruihe Cloud Map.

"Cloud+Network+Security" Integrated Management Platform

Ruihe Cloud Map positions itself on a technical system of "cloud security management platform + security resource pool + NFV security network elements". On the design thinking behind it, Zhou Dong said that it is not a single security product but a systematic architecture, which requires cloud, network and security to be integrated into a unified management platform.

Building a security resource pool requires integrating a large amount of complex network technology, because security changes the flow of users' business traffic; the virtual network security boundary must be identified and the relevant security capabilities attached to it.

From an operational point of view, the cloud security management platform involves three roles: first, platform-level operation and maintenance of the security resource pool; second, security configuration for user access; and third, user service, which ties the whole logical process together.

Zhou Dong said that because this system has to stand up to live-fire conditions, its core technologies are reflected in the following points:
First, a high-performance foundation. The system is built on commodity servers and standard switching, can carry hundreds of gigabits of traffic, and can be clustered across multiple servers into a very large system able to withstand carrier-grade, petabyte-level traffic pressure. High performance in turn drives excellent cost-effectiveness.
Second, stability under enormous and complex pressure. The system must guarantee telecom-grade availability of 99.999%; redundant architecture design and closed-loop handling of failures are the keys to achieving this high reliability.

Third, flexible adaptability. The hardware meets China's IT innovation (Xinchuang) requirements and has excellent third-party compatibility, giving customers a resource pool of security capabilities that can be easily expanded and customized on demand.

Zhou Dong concluded that the next five to ten years will be a golden period in which cloud security reaches peak development. The layout of Ruihe Cloud Map in the cloud security system is now basically mature, and riding this tailwind, combined with domestic demand for IT innovation (Xinchuang), migration to national cryptographic standards, and the overall shift to SASE services, the company expects to grow 50% to 100% a year.

Software defines security; service creates value. Making security easier is the founding intention of Ruihe Cloud Map and Zhou Dong's mission.

Profile

Zhou Dong, founder and CEO of Ruihe Cloud Map, holds a master's degree in pattern recognition and artificial intelligence from Tsinghua University. He is a senior cloud security expert and an early advocate and practitioner of the cloud security resource pool technical route.

Asean Basketball League: Hong Kong Eastern battle into play-offs final, await opponents

Hong Kong Eastern are set to enjoy an extra day off before the Asean Basketball League (ABL) play-offs final tips off on Saturday.

But head coach Zeljko Pavlicevic played down the importance of any extra rest after the 2017 champions came from behind to defeat NS Matrix 76-70 at Nguyen Du Stadium in Ho Chi Minh City on Wednesday night, winning their semi-final series 2-0 against the Malaysian side.

In the other semi-final, Saigon Heat pushed the series to a decider with an 81-79 overtime win against Singapore Slingers, leaving Eastern's opponents yet to be confirmed.

“Yes, we will have one day off, then we will know who we will play and start making preparations like we always do,” veteran coach Pavlicevic said.


“These are experienced players who will be even more motivated under these kinds of situations. I do not want to say this is some kind of advantage.”

Facing a must-win game, NS Matrix – the only tournament debutants to reach the play-offs this season – came out more determined.