Posts

Showing posts from August, 2022

ARMember: Hook to Change Form Field value

How do I change the case/value of an ARMember form field before it creates the subscriber? I can make the desired changes in $posted_data, which is passed to the various hooks I tried, but the changes do not persist outside of my_function. I tried global $posted_data inside my_function and passing &$posted_data as a parameter. It must be something simple. Thanks.

How to set required on a custom select option?

I want to show a "required" message when the user doesn't make a choice. This is my code: var options = document.querySelectorAll('.myOptions'); var selecText = document.querySelector('.selectFeld>p'); var mylist = document.querySelector('.list_contrat'); //var iconSelect = document.querySelector(".icon_typeCont_rota"); var valueTypeContra = document.querySelector('#typecontrat'); for(option of options) { option.onclick = function() { mylist.classList.toggle('myhide'); //iconSelect.classList.toggle('myRotate'); selecText.innerHTML = this.textContent; valueTypeContra.value = this.getAttribute('data-value'); // get value select option } } <div class="selectFeld" title="Type de contrat"> <input type="text" name="typeContrat" id="typecontrat" class="d-none" required> <p>Type de contrat<...

Anchor tag with spans cannot be clicked

I cannot click on that anchor. I tried a couple of possible solutions, such as adding another anchor or div inside and outside that anchor, and adding a z-index and position. Edit: I've added the full code and I realize that z-index must solve the problem; however, I could not find where and how to use z-index. @import url('https://fonts.googleapis.com/css2?family=Noto+Sans+JP:wght@900&display=swap'); @import url('https://fonts.googleapis.com/css2?family=Catamaran:wght@100&display=swap'); * { scroll-behavior: smooth; text-decoration: none; } div, section { margin: 0; padding: 0; } body { margin: 0; padding: 0; background-color: #000; font-family: 'Catamaran', sans-serif; } main { pointer-events: none; height: 100vh; width: 100%; padding: 0; margin: 0; position: relative; z-index: -1; } .hi, .name { font-family: 'Noto Sans JP', sans-serif; font-size: 170px; color: #fff;...

How to trigger an action in the adapter from the UI, and in the reverse situation from the adapter to the UI?

Is it possible to stop the flow after first collecting the data? When I debug it, the first click produces this order: fromFragment private val mutableStateAdapterFlow = MutableStateFlow(-1) [...] vm.updateGoalData(value.id, updatedData) // method to updateMyGoal setGoalsAdapter.notifyDataSetChanged() I also tried without notifyDataSetChanged(), because the flow triggers it anyway. Then in the adapter the methods in the coroutine are triggered multiple times: it changes the value by +1 and then the UI shows the previous value, i.e. daysLeft was 5, I clicked to add 1 day, it goes to 6, but this coroutine is triggered multiple times and it comes back to 5. addDay.setOnClickListener { onPlusButtonClickedListener( CustomSetGoalsDialogData( item.id, item.goal, item.timeGoal )...

Rust: how to assign `iter().map()` or `iter().enumerate()` to the same variable

struct A {...whatever...}; const MY_CONST_USIZE:usize = 127; // somewhere in function // vec1_of_A:Vec<A> vec2_of_A_refs:Vec<&A> have values from different data sources and have different inside_item types let my_iterator; if my_rand_condition() { // my_rand_condition is random and compiles for sake of simplicity my_iterator = vec1_of_A.iter().map(|x| (MY_CONST_USIZE, &x)); // Map<Iter<Vec<A>>> } else { my_iterator = vec2_of_A_refs.iter().enumerate(); // Enumerate<Iter<Vec<&A>>> } How do I make this code compile? In the end (based on the condition) I would like to have an iterator that can be built from either input, and I don't know how to unify these Map and Enumerate types in a single variable without calling collect() to materialize the iterator as a Vec. Reading material will be welcome.

HTTPS & TCP Traffic Through AWS ALB

I'm quite new to networking, but I have been working on this problem for quite some time with no success. I have an AWS EC2 instance (Windows Server) hosting a video management web portal. The user should be able to access the web portal through their browser and view video footage (traffic is both HTTP and TCP). The issue is that I am trying to route DNS requests for the web portal through an Amazon application load balancer, forwarded to my EC2, so that I can make use of Amazon's certificate manager, as I would like the webpage to be encrypted. If I access the EC2 directly (with its IP or DNS), everything works correctly. However, when the traffic routes through the ALB, the video never loads, and I assume this is because the ALB does not pass the TCP traffic through, just the HTTP/HTTPS traffic. If I use a network load balancer to route the traffic then I am able to see the video just fine; the issue here is that there is no way to add my certificate to the NLB and en...

Python POST with nested parameters and X-XSRF-TOKEN failure

I am trying to collect data from the following URL: https://muskegon.policetocitizen.com/Inmates/Catalog . This relies on a secondary POST to https://muskegon.policetocitizen.com/api/Inmates/3 using an X-XSRF-TOKEN (which appears to be just an XSRF token, available in the cookies). When I try to include the specified parameters and this token, my code is as follows: import requests from urllib.parse import urlencode r = requests.Session() res = r.get(url) cookies = res.cookies cross_ref_token = res.cookies.get("XSRF-TOKEN") payload = { "FilterOptionsParameters": { "IntersectionSearch": "true", "SearchText": "", "Parameters": [] }, "IncludeCount": "true", "PagingOptions": { "SortOptions": [], "Take": 10, "Skip": 0 ...
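A minimal sketch of how the request might be sent, assuming the API expects the token in an X-XSRF-TOKEN header and the parameters as a JSON body rather than form-encoded data (both assumptions; the endpoint and payload fields are taken from the question):

import requests

BASE = "https://muskegon.policetocitizen.com"
session = requests.Session()
session.get(f"{BASE}/Inmates/Catalog")                 # sets the XSRF-TOKEN cookie
token = session.cookies.get("XSRF-TOKEN")

payload = {
    "FilterOptionsParameters": {
        "IntersectionSearch": True,
        "SearchText": "",
        "Parameters": [],
    },
    "IncludeCount": True,
    "PagingOptions": {"SortOptions": [], "Take": 10, "Skip": 0},
}

# json= serializes the nested dict as a JSON body; data=urlencode(...) would
# flatten the nesting in a way an API like this usually cannot parse.
resp = session.post(
    f"{BASE}/api/Inmates/3",
    json=payload,
    headers={"X-XSRF-TOKEN": token or ""},
)
print(resp.status_code)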

Run job on all existing Jenkins workers

I have a job in a pipeline that cleans up Docker images. It runs the job on each worker individually. This is frustrating because when I add jenkins-cpu-worker3, I'll have to update this job. I'd like to run this job in such a way that it runs on all workers without having to update it each time a new worker is present. I also want the job to be able to run regardless of what I name each worker. It needs to run on all workers no matter what. Is there a way to query Jenkins from within the pipeline to get a list or array of all the workers that exist? I was leafing through documentation and posts online and have not found a solution that works. If possible, I'd like to do this without any additional Jenkins plugins. pipeline { agent any stages { stage('Cleanup jenkins-cpu-worker1') { agent { node { label 'jenkins-cpu-worker1' } } steps { sh "docker container prune -f" sh ...

Why are these 2 queries giving different outputs?

Query no 1:- SELECT COUNT(ENAME) FROM EMP WHERE JOB IN 'MANAGER' OR JOB IN 'ANALYST' AND SAL IN ( SELECT SAL + NVL (COMM,0) FROM EMP WHERE SAL LIKE '%0') GROUP BY JOB; The Query 1 gives me the following output:- COUNT(ENAME) ------------ 3 2 Query no 2:- SELECT COUNT(ENAME) FROM EMP WHERE JOB = ANY ( SELECT JOB FROM EMP WHERE JOB IN ('MANAGER', 'ANALYST') ) AND SAL IN ( SELECT SAL + NVL (COMM,0) FROM EMP WHERE SAL LIKE '%0' ) GROUP BY JOB; The Query 2 gives me the following output:- COUNT(ENAME) ------------ 2 2

Save Nested JSON in MySQL Database using Spring Boot

I want to save this nested JSON data in a MySQL DB, which has a JSON column, using Spring Data JPA. How can I make an entity class for such data? I don't want to establish any relationships; I just want to save the data from the input and be able to fetch it. Do I need to create new entity classes for the nested objects even if I don't want to establish any relationship between them? { "Data": [ { "url": "xyz.com", "pswd": "admin", "user": "admin", "Test_Case": "T01" } ], "Page": [ { "Index": "", "Property": "", "Identifier": "", "Data_Column": "url", "Description": "", "Screenshots": "", "User_Action"...

Dynamically generated tasks in Airflow 2.2.5 are moved to "REMOVED" state and break the Gantt chart

Airflow version: 2.2.5. Composer version: 2.0.19. We have a task group which creates the tasks dynamically using a for loop. Within the task group we are making use of BigQueryTableDeleteOperator to delete the tables. Issue: we noticed that once the tables are deleted, all the tasks move to the REMOVED state, breaking the Gantt chart with the error message "Task not found" (see the before/after screenshots). As shown in the screenshots, before the task group runs it shows all the tables to be deleted, represented by one task each; in this example, 2 tasks. Once the run succeeds and the tables are deleted, those tasks are removed. Sharing the piece of code below: for table in tables_list: table_name = projectid + '.' + dataset + '.' + table if table not in safe_tables: delete_table_task = bigquery_table_delete_operator.BigQueryTableDeleteOperator( task_id=f"delete_tables_{table_name}", ...

Select view area by CAShapeLayer and change background colour?

How can I select an area of a UIView using a CAShapeLayer and then change the background colour of that selected area? How can I achieve this?

Is there a way to constrain a generic type parameter to generic types?

Is there a way to constrain a generic type parameter to generic types? //I can constrain a parameter to only object types like this type GenericType<T extends object> = keyof T ... //How can I do that for generic types? type GenericModifier<T extends /* Generic<T> */> = T<...> //I want to do something like this: type Distribute<target, type> = type extends infer A ? target<A> : never; Is that possible?

How can I apply a linear transformation on sparse matrix in PyTorch?

In PyTorch, we have nn.Linear, which applies a linear transformation to the incoming data: y = WA + b. In this formula, W and b are the learnable parameters and A is my input data matrix. In my case the matrix A is too large to fit in RAM, so I store it as a sparse matrix. Is it possible to perform such an operation on sparse matrices using PyTorch?
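A minimal sketch of one way this can be done, assuming A is available as a sparse COO tensor: torch.sparse.mm multiplies a sparse matrix by a dense one, so the affine transformation can be written out explicitly with ordinary dense parameters (shapes and random values below are placeholders):

import torch

n_samples, in_features, out_features = 10_000, 500, 64

# Build a sparse A with 200 non-zero entries (stand-in for the real data).
rows = torch.randint(0, n_samples, (200,))
cols = torch.randint(0, in_features, (200,))
A = torch.sparse_coo_tensor(torch.stack([rows, cols]), torch.randn(200),
                            (n_samples, in_features))

W = torch.randn(out_features, in_features, requires_grad=True)
b = torch.zeros(out_features, requires_grad=True)

# y = A @ W^T + b, computed without densifying A.
y = torch.sparse.mm(A, W.t()) + b
y.sum().backward()        # gradients flow into W and b as usual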

Recursive query hangs then get "Error Code: 1030. Got error 1 - 'Operation not permitted' from storage engine" error

I'm trying to build a recursive query to enable me to find all future sports match records for the two players of a given match. In addition to this I need the query to return any match for any player that plays in any descendant match. To illustrate using some example data: match_id match_date p1_id p2_id 1 01/01/2022 1 2 2 02/01/2022 1 3 3 03/01/2022 3 4 4 04/01/2022 5 6 I only really need match_id so if the start match is match_id = 1 then I'm looking for the query to return 1 . The query should also return 2 because p1_id = 1 played in the start match. The query should also return 3 because p2_id = 3 played in match_id = 2 . I've written the following query: WITH RECURSIVE match_ids AS ( SELECT rt1.match_id, rt1.p1_id, rt1.p2_id, rt1.match_date FROM recursive_test_so AS rt1 WHERE rt1.match_id = 1 UNION ALL SELECT rt2.match_id, rt2.p1_id, rt2.p2_id, rt2.match_date FROM recursive_te...

Python: 2 Conditions - Read in characters Until x, and Count Vowels

Specification: read in characters until the user enters a full stop '.'. Show the number of lowercase vowels. So far, I've succeeded in completing the read loop and printing out 'Vowel Count: '. However, the vowel count always comes to 0. I've just started, and I'm struggling with where to put the 'show the number of lowercase vowels' part. Should I define vowels = ... at the top? Or put it in a loop later? Do I create a new loop? I haven't been able to make it work. Thanks. c = str(input('Character: ')) count = 0 while c != '.': count += 1 c = str(input('Character: ')) print("Vowel count =", count)
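A minimal sketch of one way to fit the vowel check into the existing loop: the vowel set is defined once up front, and each character is tested inside the loop so only lowercase vowels are counted rather than every character.

vowels = "aeiou"
vowel_count = 0

c = input('Character: ')
while c != '.':
    if c in vowels:              # counts only lowercase vowels
        vowel_count += 1
    c = input('Character: ')

print("Vowel count =", vowel_count)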

Javascript Stored Procedure Snowflake

I am working on a stored procedure which will look for the table names defined in an array across all databases and create a view by union over each table name. For example, if table A is present in DB 1 and DB 2, then create a view selecting records from both databases. create or replace procedure PROC_1() returns VARCHAR -- return final create statement language javascript as $$ //given two db for testing var get_databases_stmt = "SELECT DATABASE_NAME FROM SNOWFLAKE.INFORMATION_SCHEMA.DATABASES WHERE DATABASE_NAME='TERRA_DB' OR DATABASE_NAME='TERRA_DB_2'" var get_databases_stmt = snowflake.createStatement({sqlText:get_databases_stmt }); var databases = get_databases_stmt.execute(); var row_count = get_databases_stmt.getRowCount(); var rows_iterated = 0; //table on which view will be created var results_table=['STAGE_TABLE','JS_TEST_TABLE']; var results_db=[]; while (databases.next()) { var database_name = database...

How to wait for the canvas fade in out to finish before saving the game?

This script makes a canvas group's alpha change between 0 and 1: using System.Collections; using System.Collections.Generic; using UnityEngine; using TMPro; public class Description : MonoBehaviour { public Canvas canvas; public AnimationCurve animationCurve; public float fadingSpeed = 5f; public TMP_InputField _inputField; public enum Direction { FadeIn, FadeOut }; private CanvasGroup canvasGroup; void Start() { if (canvas == null) canvas = GetComponent<Canvas>(); canvasGroup = canvas.GetComponent<CanvasGroup>(); if (canvasGroup == null) Debug.LogError("Please assign a canvas group to the canvas!"); if (animationCurve.length == 0) { Debug.Log("Animation curve not assigned: Create a default animation curve"); animationCurve = AnimationCurve.EaseInOut(0f, 0f, 1f, 1f); } } public void StartFading(bool InOut) { if (canvasG...

How to limit the number of CPUs with the Docker Client API?

I have a script using the docker Python library (Docker Client API). I would like to limit each Docker container to use only 10 CPUs (out of 30 CPUs in the instance), but I couldn't find a solution to achieve that. I know Docker has a --cpus flag, but the library only documents a cpu_shares (int): CPU shares (relative weight) parameter. Does anyone have experience setting a limit on CPU usage using docker? import docker client = docker.DockerClient(base_url='unix://var/run/docker.sock') container = client.containers.run(my_docker_image, mem_limit='30g') Edit: I tried nano_cpus as suggested here, like client.containers.run(my_docker_image, nano_cpus=10000000000) to set 10 CPUs. When I inspected the container, it did show "NanoCpus": 10000000000. However, if I run R in the container and do parallel::detectCores(), it still shows 30, which confuses me. I have also added the R tag now. Thank you!
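A minimal sketch, assuming docker-py 3.x or later: nano_cpus caps CPU time (the equivalent of docker run --cpus), while cpuset_cpus pins the container to specific cores. A quota alone does not change how many cores software inside the container reports, because tools that read /proc/cpuinfo still see the host's full topology; affinity-aware tools do respect cpuset_cpus.

import docker

client = docker.DockerClient(base_url="unix://var/run/docker.sock")

container = client.containers.run(
    "my_docker_image",           # placeholder image name from the question
    mem_limit="30g",
    nano_cpus=10_000_000_000,    # CPU-time quota equivalent to --cpus=10
    cpuset_cpus="0-9",           # restrict the container to cores 0..9
    detach=True,
)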

Invoking if/then within find/replace routine

I have a macro to perform various tech editing tasks in technical documents. One task is to ensure large numbers have commas in the correct locations. My routine to insert commas works fine, but also includes dates, street numbers, etc. (e.g., 15 January 2,022 and 1,234 Smith Street). I am now attempting to correct the street addresses using the routine below, but am doing something wrong with my looping. Currently, it is only finding/fixing the first instance of a street number with a comma in it, then it stops looping. Please note that the current code snippet below includes several commented commands that I tried during my troubleshooting ... What am I missing? 'remove commas from street addresses Set oRange = ActiveDocument.Range With oRange.Find 'Set the search conditions .ClearFormatting .Text = "(<[0-9]{1,2})(,)([0-9]{3})" .Forward = True .Wrap = wdFindContinue .Format = False .MatchWildcards = True .Execute 'If .Foun...

Hide web component until browser knows what to do with it

Similar to this question: How to prevent flickering with web components? But different in that I can't just set the inner HTML to nothing until loaded, because there is slotted content, and I don't wish to block rendering the page while the web component JS executes. I thought I could add CSS to hide the element and have the web component's init unhide it, but then that CSS snippet needs to be included wherever the web component is used, which is not very modular and prone to be forgotten. I am working on a modal component; here's the code (although I don't think it's particularly relevant): <div id="BLUR" part="blur" class="display-none"> <div id="DIALOGUE" part="dialogue"> <div id="CLOSE" part="close"> X </div> <slot></slot> </div> </div> const name = "wc-modal"; const template = documen...

Error while starting Logstash: Expected one of [ \\t\\r\\n]

Connecting Logstash to SQL Server. Could you help me with the following error while starting Logstash? I executed this command: logstash.bat -f c:\DevSoft\logstash-8.3.3\bin\logstash-sample.conf and I get the following error (I tried removing all whitespace from the .conf file, but with no luck): [ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, > :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "if", [A-Za-z0-9_-], '"', "'", "}" at line 1, column 8 (byte 8) after input {", Here is logstash-sample.conf, located in the bin folder itself where logstash.bat is: input { jdbc { # jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mssql-jdbc-7.3.1.jre8-preview.jar" jdbc_driver_library => "C:\DevSoft\sqljdbc_11.2\enu\mssql-jdbc-11.2.0.jre11.jar" jdb...

Discord Bot Asynchronous Loop Failure

Please be patient with me. I'm actually not a coder and am working with code that was left to us by someone we can no longer contact for help. I've tried to do some research on my own, but I unfortunately can't extrapolate from other solutions to ours due to my insufficient skills. Part of the issue is also that our code is probably less than optimal and could be looped better by assigning i-values to each task, but I unfortunately don't know how to change it, so I'm just trying to find a solution with what we have. It was working on Heroku, but with their upcoming removal of their free services, we're looking to host it elsewhere and are running into errors. Briefly, we have a very simple Discord bot whose purpose is to check certain channels and send a message when those channels have not had activity within a certain time period. In more detail, here is the general code for checking one of the channels: import discord import datetime import asyncio...
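For reference, a minimal sketch of the general idea, assuming discord.py 2.x (the channel ID, timings, and message text are placeholders): a background task checks a channel's most recent message and posts a reminder when it is older than a threshold.

import datetime
import discord
from discord.ext import tasks

CHANNEL_ID = 123456789012345678                      # placeholder
INACTIVITY = datetime.timedelta(hours=6)

intents = discord.Intents.default()
client = discord.Client(intents=intents)

@tasks.loop(minutes=30)
async def check_inactivity():
    channel = client.get_channel(CHANNEL_ID)
    if channel is None:
        return
    last = [m async for m in channel.history(limit=1)]   # newest message, if any
    if not last:
        return
    age = datetime.datetime.now(datetime.timezone.utc) - last[0].created_at
    if age > INACTIVITY:
        await channel.send("This channel has been quiet for a while!")

@client.event
async def on_ready():
    if not check_inactivity.is_running():
        check_inactivity.start()

# client.run("YOUR_BOT_TOKEN")                        # token placeholder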

How to get all keys and values from nested JSON in Java

Hi, I need to read all keys and values from nested JSON, wherever there is an inner JSON object. From the JSON below I need the key/value pairs of the nested objects, like responseStatus-passed, "statusCode":"200", "retrieveQuoteResponse":null, "quoteGuid":null, etc., ignoring the outer keys like responsePreamble and quoteProductList, which have nested JSON inside them. { "responsePreamble": { "responseStatus": "Passed", "statusCode": "200", "responseMessage": "Records Found" }, "retrieveQuoteResponse": null, "totalQuoteProductCount": 2, "quoteProductList": { "quoteGuid": null, "quantity": 180 } Code: ObjectReader reader = new ObjectMapper().readerFor(Map.class); Map<String, Map<String, String>> employeeMap = reader.readValue(jsonObject); for (Entry...

How to loop and index through file content in Python and assign each line to a different variable

If I have a file.txt and the content looks like this: BEGAN_SIT s_alis='HTTP_WSD' xps_entity='HTTP_S_ER' xlogin_mod='http' xdest_addr='sft.ftr.net' xmax_num='99' xps_pass='pass' xparam_nm='htp' #?SITE END how can I loop through it and assign each line to a different variable?
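A minimal sketch of one approach, assuming each assignment sits on its own line as in the question: rather than creating a separate variable per line, read the file once and collect the key='value' pairs into a dictionary keyed by the name on each line (marker lines such as BEGAN_SIT and #?SITE END, which contain no '=', are skipped).

params = {}
with open("file.txt") as fh:
    for line in fh:
        line = line.strip()
        if "=" not in line:
            continue                         # skip BEGAN_SIT / #?SITE END markers
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip().strip("'")

print(params.get("xdest_addr"))              # -> sft.ftr.net
print(params.get("xmax_num"))                # -> 99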

Is there a way to manage col-sm children with justify-content-end parent?

I am trying to set 3 divs to the right of the screen, laid out horizontally, and stacked vertically on small screens. justify-content-end works perfectly on the parent div until I use col-sm on the children; then I lose the justification. Why would col-sm dismiss the justification? How can I solve this? <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.2/dist/css/bootstrap.min.css" integrity="sha384-xOolHFLEh07PJGoPkLv1IbcEPTNtaed2xpHsD9ESMhqIYd0nLMwNLD69Npy4HI+N" crossorigin="anonymous"> <div class="d-flex justify-content-end"> <div class="order-1 p-2">Some action 1</div> <div class="order-2 p-2">Another action 2</div> <div class="order-3 p-2">Triple divs 3</div> </div> The code above works and justifies perfectly, but does not stack the items vertically on small screens. The code below should do it, but it just wo...

How to pass argument to onclick callback - yew function component

How do I correctly pass an argument to an onclick event handler in a yew function component? What I have: #[function_component(Calculator)] pub fn calulator() -> Html { let navigator = use_navigator().unwrap(); let handle_formula_click = Callback::from(move |_| { navigator.push(&AppRoute::Formula { id }) }); html! { <div> ... <button onclick={handle_formula_click}> ... </button> ... </div> } } I would like to pass in a string to the handle_formula_click callback What I want: #[function_component(Calculator)] pub fn calulator() -> Html { let navigator = use_navigator().unwrap(); let handle_formula_click = Callback::from(move |id: String| { navigator.push(&AppRoute::Formula { id }) }); html! { <div> ... <button onclick={handle_formula...

Use REGEXEXTRACT() to extract only upper case letters from a sentence in Google sheets

I'm thinking this should be basic but having tried a number of things I'm nowhere nearer a solution: I have a list of names and want to extract the initials of the names in the next column: Name (have) Initials (want) John Wayne JW Cindy Crawford CC Björn Borg BB Alexandria Ocasio-Cortez AOC Björk B Mesut Özil MÖ Note that some of these have non-English letters and they may also include hyphens. Using REGEXMATCH() I've been able to extract the first initial but that's where it stops working for me. For example this should work according to regex101 : =REGEXEXTRACT(AH2, "\b[A-Z]+(?:\s+[A-Z]+)*") but only yields the first letter.

Word VBA: copy text formatted in a certain font to one file and text with other formatting to another file

From a comparison docx file I need to extract, into two Word files, the text formatted as strikethrough into one docx file and the text formatted as double underline into another docx file, to be able to perform the word count of newly inserted and deleted text separately. To do this, I wrote this macro, which activates the correct files but only copies and pastes the text matching the first search. Sub WSC_extraction_for_wordcount() 'This macro extracts double underlined text to the file "target_ins" 'This macro extracts strikethrough text to the file "target_del" Application.ScreenUpdating = False Selection.HomeKey Unit:=wdStory Selection.Find.ClearFormatting 'STRIKETHROUGH processing Do With Selection.Find.Font .StrikeThrough = True 'Then Selection.Find.Execute FindText:="", Forward:=True, Format:=True Selection.Cut Windows("target_del.docx").Activate Selec...

Why do I get AttributeError: type object 'Placeholder' has no attribute 'loads', when running PyInstaller?

I am using Python 3.10.6 and pip 22.2.2 on Windows 11. I have a program which uses yfinance to grab stock data and sklearn.svr to predict stock data. I want to turn this program into a .exe file using PyInstaller. PyInstaller finishes and the .exe file is created, but when I run it I get: File "PyInstaller\loader\pyimod02_importers.py", line 493, in exec_module File "requests_cache\__init__.py", line 7, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 493, in exec_module File "requests_cache\backends\__init__.py", line 7, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importl...

How to calculate days between two dates from different sheets where the emails from both sheets in row match?

I have 2 sheets in the same workbook with dates, and I need to calculate the number of days between the two dates. There is a common identifier in the rows of both sheets: email. If there are 0 days between the dates it should state 0, and if a date is missing it should show a blank "". https://docs.google.com/spreadsheets/d/1tigqy4hKFn0Q7c-3ICyI6WREnsIBFZ2oLO8CEcdxe8w/edit?usp=sharing Results: Sheet1!J:2; start date: Sheet1!D:D; end date: Day_count!B:B; matching identifier = email in Col1 on both sheets. What would be the best way to work this out without using a helper column? Answer: This works when the lookup isn't in Col1: ={"Day count"; ARRAYFORMULA(IFNA(DAYS(VLOOKUP(A2:A, {Days_count!B2:B, Days_count!C2:C}, 2, 0), D2:D)))}

Where should RBAC be implemented?

To give you some background, I have frequently worked with RBAC implemented at the SQL level, but I read in some articles that it might not be very scalable. Should RBAC be implemented on, say: the database level (i.e. row- or column-based access control); the application level (i.e. logic in the code), perhaps with some document storage support; or some other level? What are the pros and cons of each approach in terms of scalability, and what is the gold industry standard?

Track how much time the client (Angular) call is taking to hit the API controller

I want to create a performance tool to track the time taken for a call to reach the API controller and then the different layers of the application and the DB. When I used UTC datetimes, I faced an issue: the server UTC time is 5 seconds behind the client UTC time (both client and server are in the same time zone), e.g. the request was sent at 07:10:05 AM and it "reached" at 07:10:01. So if the server time is not correct, using UTC time will also give the wrong duration, right? Are there any other ideas for implementing this requirement?

How can I store a one-to-many relationship in PostgreSQL via an array of alternate keys?

I have two classes, Role and PosUser. public class Role : IEntity { public string Name { get; set; } [Column(TypeName = "jsonb")] public string[] Permissions { get; set; } public bool IsProtected { get; set; } public uint Priority { get; set; } #region IEntity #endregion } public class PosUser : IEntity { public string Name { get; set; } public List<Role> Roles { get; set; } #region IEntity #endregion } I want to have a table for each of these entities. Roles should not know anything about Users, but every User should store a jsonb array of role names like ["Admin", "Test"]. I tried to use: protected override void OnModelCreating(ModelBuilder builder) { builder.Entity<Role>().HasAlternateKey(x => x.Name); builder.Entity<PosUser>().Property(u => u.Roles) .HasPostgresArrayConversion(r => r.Name, name => Find<Role>(name)); ...

JdbcEnvironmentInitiator: "HHH000342: Could not obtain connection to query metadata" - what is the solution?

#hibernate properties spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MYSQLInnoDBDialect spring.jpa.hibernate.ddl-auto=update logging.level.org.hibernate.SQL=DEBUG

JFrog plan available in the Azure Marketplace with NPM package support

Which JFrog plan available in the Azure Marketplace supports NPM packages at a low cost? Please suggest one.

Jfrog Artifactory remote repository with certificates

Server: Ubuntu 20.04. JFrog Artifactory: 7.39.10. We have a lot of remote repos with certificate auth pointing to Red Hat. Whenever we automatically reboot the Artifactory server, the remote repos have problems connecting to Red Hat. But curiously, if I go to the config menu of one remote repo and make any little change (no matter what), immediately all remote repos can connect to Red Hat again. Does anyone have an idea how this can be fixed?

antMatcher not working with antMatchers security config

I am working on a Spring Boot security config where I want one of the URLs to be excluded from the security filter. URL format: URL/v1/btob/** . URL format to be excluded: URL/v1/btob/icici/pay Here's my configure method: @Override public void configure(HttpSecurity http) throws Exception { http .csrf().disable(); http .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS); http .antMatcher("/v1/btob/**") .httpBasic() .and() .csrf().disable() .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS) .and() .cors() .and() .authorizeRequests() .antMatchers(HttpMethod.POST, "/icici/pay").permitAll() .anyRequest().authenticated() .and() .addFilterBefore(btoBFilter, UsernamePasswordAuthenticationFilter.class); } @Override public ...

R improve loop efficiency: Operating on columns that correspond to rows in a second dataframe

I have two data frames: dat <- data.frame(Digits_Lower = 1:5, Digits_Upper = 6:10, random = 20:24) dat #> Digits_Lower Digits_Upper random #> 1 1 6 20 #> 2 2 7 21 #> 3 3 8 22 #> 4 4 9 23 #> 5 5 10 24 cb <- data.frame(Digits = c("Digits_Lower", "Digits_Upper"), x = 1:2, y = 3:4) cb #> Digits x y #> 1 Digits_Lower 1 3 #> 2 Digits_Upper 2 4 I am trying to perform some operation on multiple columns in dat similar to these examples: In data.table: iterating over the rows of another data.table and R multiply columns by values in second dataframe . However, I am hoping to operate on these columns with an extended expression for every value in its corresponding row in cb . The solution should be applicable for a ...

infinite loop with react/redux

I have tirelessly tried everything I can find on Stack Overflow for this issue and am getting nowhere. We are using React/TypeScript, Redux, and Saga. I have a list of categories to bring back for a nav list and am using useEffect to dispatch the action to the Redux store. Our .tsx file: const dispatch = useDispatch(); const categories = useSelector((state) => state?.categories?.payload); const loadCategories = () => { dispatch(getCategories(categories)); }; useEffect(() => { loadCategories(); }, []); {categories?.map((x, index) => ( <Link href={"/store/" + `${x.name}` + "/s"}> <a type="button" id={`${x.name}`} title={`${x.name}`} className={"xl:px-3 px-2 py-[1.15rem] font-normal"}> {x.name} </a> </Link> ))} Network traffic just shows hundreds of requests going out to the category endpoint -- stumped! still stuck so ad...

SVM problem - name 'model_SVC' is not defined

I have a problem with this code: from sklearn import svm model_SVC = SVC() model_SVC.fit(X_scaled_df_train, y_train) svm_prediction = model_SVC.predict(X_scaled_df_test) The error message is NameError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_14392/1339209891.py in ----> 1 svm_prediction = model_SVC.predict(X_scaled_df_test) NameError: name 'model_SVC' is not defined Any ideas?
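The likely cause is the import line: from sklearn import svm only brings in the module, so SVC() raises a NameError and model_SVC is never created, which is why the later predict call then reports name 'model_SVC' is not defined. A minimal sketch of the fix, with dummy data standing in for the question's X_scaled_df_train / X_scaled_df_test:

import numpy as np
from sklearn.svm import SVC          # import the class itself (or call svm.SVC())

X_train = np.random.rand(20, 4)
y_train = np.random.randint(0, 2, size=20)
X_test = np.random.rand(5, 4)

model_SVC = SVC()
model_SVC.fit(X_train, y_train)
svm_prediction = model_SVC.predict(X_test)
print(svm_prediction)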

Error response from daemon: manifest for abhishek8054/token-app:latest not found: manifest unknown: manifest unknown

I made my own Docker image (a simple React app) and pushed it to Docker Hub. Now, when I try to pull the image on my system, it shows me an error: Error response from daemon: manifest for abhishek8054/token-app:latest not found: manifest unknown: manifest unknown". Am I doing something wrong? My Dockerfile is: FROM node:16-alpine WORKDIR /app/ COPY package*.json . RUN npm install COPY . . EXPOSE 3000 CMD ["npm","start"] And I made the image with the following command: docker image build -t abhishek8054/token-app:latest . And pushed my image with the following command: docker push abhishek8054/token-app:latest And pulled it again with the following command: docker pull abhishek/8054/token-app And it gives me the error above.

merging df in pandas based on "contains" values

I have 2 dfs df_1 Nº.do Incidente Status Description Per_Extracao 0 IN6948271 ENCERRADO GR26 D.I.T.I. >>> ABEND NO JOB PP_SASG_GD9822... DE : 2022/01/05 ATÉ : 2022/12/08 1 IN6948304 ENCERRADO GR26 D.I.T.I. >>> ABEND NO JOB PP_AACE_R4539 ... DE : 2022/01/05 ATÉ : 2022/12/08 2 IN6948307 ENCERRADO GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX... DE : 2022/01/05 ATÉ : 2022/12/08 3 IN6948309 ENCERRADO GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX... DE : 2022/01/05 ATÉ : 2022/12/08 4 IN6948310 ENCERRADO GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX... DE : 2022/01/05 ATÉ : 2022/12/08 5 IN6948311 ENCERRADO GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX... DE : 2022/01/05 ATÉ : 2022/12/08 df_2 JOB_NAME JOB_STREAM_NAME 0 PP_AACD_NR_D8706_TIHIBRIDA_PROC_EXCUC_D P26_AACD_FAC_TOD 1 PP_SASG_GD9822 P26_AACE_U08 2 PP_AACE_R4539 P26_AACE_U09 3 PP_AACE_R4539_CONS_JUNC P26_A...

tf.cast not changing the dtype ORIGINAL ISSUE:tensorflowjs Error: Argument 'x' passed to 'conv2d' must be float32 tensor, but got int32 tensor

I'm trying to load a model I developed in TensorFlow (Python) with TensorFlow.js and make a prediction for a test input, as follows: tf_model = await tf.loadGraphModel('http://localhost:8080/tf_models/models_js/model/model.json') let test_output = await tf_model.predict(tf.tensor2d([0.0, -1.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [1, 9], 'float32')) console.log("[Test tf model]:", test_output.arraySync()) I'm getting this error in the JS console at tf_model.predict: Error: Argument 'x' passed to 'conv2d' must be float32 tensor, but got int32 tensor, even though the input of the Conv2D layer is of type float32 in the model definition: inputs = tf.keras.layers.Input((9)) # One-Hot encoding x = tf.cast(tf.one_hot(tf.cast(inputs + 1, tf.int32), 3), tf.float32) x = tf.reshape(x, (-1, 3, 3, 3)) x = tf.keras.layers.Conv2D( filters=3**5, kernel_size=(3, 3), kernel_regularizer=kernel_regularizer )(x) Does anybody know why this could happen?...

WPML - SQL query to get post by current language

I'm using WPML and need a custom search query (the default WordPress search is not working in my case). Let's say my query is: "SELECT * FROM wp_posts WHERE post_title LIKE %s% AND [WPML_CURRENT_LANGUAGE_CONDITION]" ; Please help me work out which table I should look at for this. Thanks!

TF WideDeepModel - Shape Error when Passing Different Features for Wide and Deep Models

I am attempting to recreate the Wide and Deep model using Tensorflow's WideDeepModel library; however, I am encountering an issue when attempting to differentiate between the wide model inputs and the deep model inputs. Referenced below is the code that I am using. # Create LinearModel and DNN Model as in Examples 1 and 2 optimizer = tf.keras.optimizers.Ftrl( l1_regularization_strength=0.001, learning_rate=tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=0.1, decay_steps=10000, decay_rate=0.9)) linear_model = tf.compat.v1.keras.experimental.LinearModel() linear_model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy']) linear_model.fit(X_train[wideInputs], y_train, epochs=50) dnn_model = tf.keras.models.Sequential([ tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(1) ]) dnn_model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy...

Error in last 3 months calculation in PowerBi

Calculating the last 3 months (or average) was working in one PBIX file. Now when I try to use it in another, it comes up empty. The naming seems to be right and the Calendar is a copy from the working PBIX. What can have gone wrong between the two files; is there something obvious you can see? Is there something I should check and possibly change? SLS L3m (USD) = var months = 3 var sum_period = CALCULATE( [Sales (USD)], DATESINPERIOD('Calendar'[Date] , FIRSTDATE('Calendar'[Date])+1 , -months, MONTH ) ) return IF( NOT(ISBLANK([Sales (USD)])), sum_period) INV avg L12m (USD) = var months = 12 var sum_period = CALCULATE( [Inventory (USD)], DATESINPERIOD('Calendar'[Date] , LASTDATE('Calendar'[Date]) , -months, MONTH ) ) return IF( NOT( ISBLANK( [Inventory (USD)] )), sum_period/months )

java.lang.NoSuchFieldError: Companion when using `influx-client-reactive` and `quarkus`

The error occurs when instantiating a client: InfluxDBClientReactive influxDBClient = InfluxDBClientReactiveFactory.create( influxConf.url(), influxConf.username(), influxConf.password().toCharArray()); The okhttp dependency is excluded from the quarkus-bom: implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}") { exclude group: "com.squareup.okhttp3", module: "okhttp" } implementation "com.influxdb:influxdb-client-reactive:6.4.0" otherwise okhttp 3.x.x is forced and causes java.lang.NoSuchMethodError: 'okhttp3.RequestBody okhttp3.RequestBody.create(java.lang.String, okhttp3.MediaType)' at the same line. Trace: Companion java.lang.NoSuchFieldError: Companion at okhttp3.internal.Util.<clinit>(Util.kt:70) at okhttp3.HttpUrl$Builder.parse$okhttp(Htt...

How do I map a Comparator

I have a comparator of type Comparator<Integer> and a function Function<Pair<Integer,?>,Integer> expressed as Pair::left (that returns an Integer ). I need to obtain a comparator of type Comparator<Pair<Integer,?>> . If I wanted to simply map a function Function<T,U> to a resulting function Function<T,V> though a function Function<U,V> I could simply apply andThen() method like this: Function<Integer, String> toBinary = Integer::toBinaryString; Function<Pair<Integer, ?>, Integer> left = Pair::left; var pairToBinary = left.andThen(toBinary); // has type Function<Pair<Integer, ?>, String> Is it possible to obtain Comparator<Pair<Integer,?>> in a similar way?

UICollectionView: make first item's width different from the rest

I'm currently trying to achieve the following layout using NSCollectionLayoutSection. Do you have any advice on making only the first item 50px wide while keeping the rest of the items at 100px (there could be any number of items)? The solution has to be an NSCollectionLayoutSection. I'm currently displaying them all at the same width using the following, which is not the desired result: let item = NSCollectionLayoutItem(layoutSize: .init(widthDimension: .fractionalWidth(1.0), heightDimension: .fractionalHeight(1.0))) item.contentInsets = NSDirectionalEdgeInsets(top: 0, leading: 0, bottom: 0, trailing: 8) let group = NSCollectionLayoutGroup.horizontal(layoutSize: NSCollectionLayoutSize(widthDimension: .absolute(100), heightDimension: .absolute(100)), subitems: [item]) let section = NSC...

How to get 500 decimal digits of output in Python?

In the following example: import math x = math.log(2) print("{:.500f}".format(x)) I tried to get 500 digits output I get only 53 decimals output of ln(2) as follows: 0.69314718055994528622676398299518041312694549560546875000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 How I can fix this problem?
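A minimal sketch of one fix: math.log returns a 64-bit float, which only carries about 15-17 significant decimal digits, so formatting it to 500 places just pads out the nearest binary value with zeros. Computing the logarithm at an explicitly requested precision with the decimal module (or mpmath) avoids this.

from decimal import Decimal, getcontext

getcontext().prec = 510          # a few guard digits beyond the 500 we want

x = Decimal(2).ln()
print(x)

# mpmath works as well:
# from mpmath import mp
# mp.dps = 500
# print(mp.log(2))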

Parse p nodes text including sibling nodes until the next p node

Weird title, I know. I am trying to parse an XML document which is kind of structured in paragraphs. However, sometimes there are additional nodes which should be inside a paragraph but simply aren't. What I need is to find each paragraph but also select everything until the next paragraph, or up to a "termination" node, which here is the title node. Here's an example: <p typ="ct">(1) This is rule one</p> <ol> <li>With some text</li> <li>that I want to parse</li> </ol> <p typ="ct">(2) And here is rule two</p> <p typ="ct">(3) and rule three</p> <title>Another section</title> My desired output would be something like: [ "(1) This is rule one\nWith some text\nthat I want to parse", "(2) And here is rule two", "(3) and rule three" ] I know I can select each paragraph using something like soup.select("p[typ=...
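A minimal sketch of one approach, assuming BeautifulSoup with an XML-capable parser (lxml): for each matching <p>, walk its following siblings and collect their text until the next <p> or a terminating node such as <title> appears.

from bs4 import BeautifulSoup

xml = """<doc>
<p typ="ct">(1) This is rule one</p>
<ol><li>With some text</li><li>that I want to parse</li></ol>
<p typ="ct">(2) And here is rule two</p>
<p typ="ct">(3) and rule three</p>
<title>Another section</title>
</doc>"""

soup = BeautifulSoup(xml, "xml")
results = []
for p in soup.select('p[typ="ct"]'):
    parts = [p.get_text()]
    for sib in p.find_next_siblings():
        if sib.name in ("p", "title"):       # stop at the next paragraph or terminator
            break
        parts.append(sib.get_text("\n", strip=True))
    results.append("\n".join(parts))

print(results)
# ['(1) This is rule one\nWith some text\nthat I want to parse',
#  '(2) And here is rule two', '(3) and rule three']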

ModuleNotFoundError in PIP package install in Conda Environment

I am trying to install a package in a new conda environment using the pip command. It installs, but with errors, and I get ModuleNotFoundError in the IDE. The steps: conda create --name facebookscraper python=3.8 all goes well conda activate facebookscraper all goes well conda install pip all goes well pip install facebook-scraper it installs, but at the end of the installation I get multiple WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/XYZPackageName already exists : facebookscraper) macbook@macbook ~ % pip install facebook-scraper Collecting facebook-scraper Using cached facebook_scraper-0.2.58-py3-none-any.whl (44 kB) Collecting demjson3<4.0.0,>=3.0.5 Using cached demjson3-3.0.5-py3-none-any.whl Collecting dateparser<2.0.0,>=1.0.0 Using cached dateparser-1.1.1-py2.py3-none-any.whl (288 kB) Collecting requests-html<0.11.0,>=0.10.0 WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/tzlocal already exists. Specify --u...

FluentValidation ILanguageManager.GetString() not invoked for custom Rules

I have a custom rule like this: public static IRuleBuilderOptionsConditions<T, string?> MustBeCool<T>(this IRuleBuilder<T, string?> ruleBuilder) { return ruleBuilder.Custom((input, context) => { if(/*input is not cool*/) { context.AddFailure("Not cool."); } }); } I also have a custom implementation of the ILanguageManager which pulls translations for validation messages from a database. My custom LanguageManager works fine for built-in rules. My problem now is that ILanguageManager.GetString(...) is not getting called for my custom rule. I guessed that this might be because a validation error message is already provided, so I tried to add the failure like this: context.AddFailure(new ValidationFailure { PropertyName = context.PropertyName, ErrorCode = "MustBeCoolValidator" // no error message provided }); That doesn't work either. An empty validation error message is ...

DataContractSerializer fails for List

I have changed my serialization to DataContracts, but now I am having a problem with a specific class. It works fine on my Mac, but not on my Android devices when built using IL2CPP. The thread stops at the writeObject function. My three classes related to the error: [DataContract] [KnownType(typeof(TaskIdentifier))] [KnownType(typeof(TraceableTaskItem))] [KnownType(typeof(List<TraceableTaskItem>))] public class TraceableTaskContainer { [DataMember] protected TaskIdentifier _taskIdent; [DataMember] protected List<TraceableTaskItem> _lNotAccomplishedTaskItems = new List<TraceableTaskItem>(); //..... } [DataContract] [KnownType(typeof(DateTime))] [KnownType(typeof(ItemReviewStage))] public class TraceableTaskItem : GenericTaskItem, IEquatable<TraceableTaskItem>, IComparable<TraceableTaskItem> { [DataMember] public string sDisplayTextInTraceableTaskReport; [DataMember] protected DateTime NextReviewDate; [DataMember] //It...

Pandas Similar rows Search

How would I filter data on multiple criteria through the spreadsheet using Python (pandas)? I am trying to filter transactions where Curr1 = USD, the Trade Time is within 1 minute, the Notional 1 is the same, and the Price is within a 0.5% spread between transactions. Then the row with the furthest (highest) Maturity would be moved to a different sheet in Excel. Example of the data: GoogleDrive Excel File Thank you in advance!
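A minimal sketch of one possible approach, assuming columns named Curr1, Trade Time, Notional 1, Price and Maturity (the real names and file come from the linked spreadsheet): bucket the USD transactions by notional and a 1-minute time window, keep groups whose price spread stays within 0.5%, and take the row with the latest maturity in each group.

import pandas as pd

df = pd.read_excel("transactions.xlsx")          # placeholder file name
usd = df[df["Curr1"] == "USD"].copy()
usd["minute_bucket"] = pd.to_datetime(usd["Trade Time"]).dt.floor("1min")

def within_spread(group):
    lo, hi = group["Price"].min(), group["Price"].max()
    return len(group) > 1 and (hi - lo) / lo <= 0.005

matches = (
    usd.groupby(["Notional 1", "minute_bucket"])
       .filter(within_spread)
       .sort_values("Maturity")
       .groupby(["Notional 1", "minute_bucket"])
       .tail(1)                                  # furthest maturity per group
)

matches.to_excel("furthest_maturity.xlsx", index=False)

A fixed 1-minute bucket is a simplification of "within 1 minute"; a rolling window over the sorted trade times would be closer to the stated rule but is more involved.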

Improve gremlin traversal query performance

I would like to start from a particular source node (id = '01F546'), traverse in both directions for x hops (4 in the example below), and list the properties of the first 200 destination nodes meeting certain criteria ('type' = 'output' in the sample below). I have set up the timeLimit to make sure the query returns before timing out. I have created composite/mixed indexes on 'id' and 'type'. For a graph of 250k nodes and 400k edges, the query takes about ~7 seconds via the Gremlin query console. What can be done to speed up the performance? Thank you. The Gremlin query & profile() results are below: g.V(). has('id', eq('01F546')).emit(). repeat(bothE().otherV().timeLimit(300000)).times(4). has('type', eq('output')). map(properties().group().by(key()).by(value())). dedup(). limit(200). toList() The output of the profile is: HasStep([type.eq(output)]) 1...